I have a file.sh with the following contents; when it runs, it shows: TERM environment variable not set.
smbmount //172.16.44.9/APPS/Interfas/HERRAM/sc5 /mnt/siscont5 -o iocharset=utf8,username=backup,password=backup2011,r
if [ -f /mnt/siscont5/HER.TXT ]; then
    echo "No puedo actualizar ahora"
    umount /mnt/siscont5
else
    if [ ! -f /home/emni/siscont5/S5.TXT ]; then
        echo "Puedo actualizar... "
        touch /home/emni/siscont5/HER.TXT
        touch /mnt/siscont5/SC5.TXT
        mv -f /home/emni/siscont5/CCORPOSD.DBF /mnt/siscont5
        mv -f /home/emni/siscont5/CCTRASD.DBF /mnt/siscont5
        rm /mnt/siscont5/SC5.TXT
        rm /home/emni/siscont5/HER.TXT
        echo "La actualizacion ha sido realizada..."
    else
        echo "No puedo actualizar ahora: Interfaz exportando..."
    fi
fi
umount /mnt/siscont5
echo "/mnt/siscont5 desmontada..."
You can check whether it is really unset by running the command set | grep TERM.
If it isn't set, you can set it like this:
export TERM=xterm
Using a terminal-dependent command, e.g. clear, in a script called from cron (which has no terminal) will trigger this error message. In your particular script it is the smbmount command that expects a terminal, in which case the workarounds above are appropriate.
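For example, a minimal sketch of guarding terminal-only commands so the same script also runs cleanly from cron (the clear line is just a stand-in for whatever terminal-dependent command you have):

# Only run terminal-specific commands when stdout really is a terminal.
if [ -t 1 ]; then
    clear
fi
# Give anything that insists on $TERM a harmless default.
export TERM="${TERM:-xterm}"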
You've answered the question with this statement:
Cron calls this .sh every 2 minutes
Cron does not run in a terminal, so why would you expect one to be set?
The most common reason for getting this error message is that the script attempts to source the user's .profile, which does not check that it is running in a terminal before doing something tty-related. Workarounds include using a shebang line like:
#!/bin/bash -p
which causes the sourcing of the system-level profile scripts, which (one hopes) do not attempt to do anything too silly and have guards around code that depends on being run from a terminal.
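A minimal sketch of such a guard in ~/.profile (the stty line is only a placeholder for whatever tty-dependent command is actually there):

# ~/.profile: skip tty-dependent setup when no terminal is attached.
if [ -t 0 ]; then
    stty -ixon    # example of a command that needs a terminal
fi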
If this is the entirety of the script, then the TERM error is coming from something other than the plain content of the script.
You can replace:
export TERM=xterm
with:
export TERM=linux
This works even on the bare kernel console of a freshly installed system.
SOLVED: on Debian 10, by adding export TERM=xterm to the script scheduled in root's crontab but executed as www-data.
$ crontab -e
*/15 * * * * /bin/su - www-data -s /bin/bash -c '/usr/local/bin/todos.sh'
File: /usr/local/bin/todos.sh
#!/bin/bash -p
export TERM=xterm && cd /var/www/dokuwiki/data/pages && clear && grep -r -h '|(TO-DO)' > /var/www/todos.txt && chmod 664 /var/www/todos.txt && chown www-data:www-data /var/www/todos.txt
If you are using the Docker PowerShell image, set the TERM environment variable with the -e flag, like this:
docker run -i -e "TERM=xterm" mcr.microsoft.com/powershell
Related
I've written a script that takes, as an argument, a string that is a concatenation of a username and a project. The script is supposed to switch (su) to the username and cd to a specific directory based upon the project string.
I basically want to do:
su $USERNAME;
cd /home/$USERNAME/$PROJECT;
svn update;
The problem is that once I do the su... it just sits there, which makes sense since the flow of execution has passed to the new user's shell. Once I exit, the rest of the commands execute, but not as desired.
I prepended su to the svn command but the command failed (i.e. it didn't update svn in the directory desired).
How do I write a script that allows the user to switch user and invoke svn (among other things)?
Much simpler: use sudo to run a shell and use a heredoc to feed it commands.
#!/usr/bin/env bash
whoami
sudo -i -u someuser bash << EOF
echo "In"
whoami
EOF
echo "Out"
whoami
(answer originally on SuperUser)
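Note that with an unquoted EOF delimiter, $variables in the heredoc are expanded by the calling user's shell before sudo ever runs; quote the delimiter to have them expanded inside the target user's shell instead (a small sketch, reusing the someuser placeholder):

sudo -i -u someuser bash <<'EOF'
echo "HOME inside is $HOME"    # expanded by someuser's shell, not the caller's
EOF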
The trick is to use the sudo command instead of su.
You may need to add this
username1 ALL=(username2) NOPASSWD: /path/to/svn
to your /etc/sudoers file
and change your script to:
sudo -u username2 -H sh -c "cd /home/$USERNAME/$PROJECT; svn update"
Where username2 is the user you want to run the SVN command as and username1 is the user running the script.
If you need multiple users to run this script, use a %groupname instead of the username1
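A hypothetical group-based entry (the group name is a placeholder) would look like:

%devteam ALL=(username2) NOPASSWD: /path/to/svn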
You need to execute all the different-user commands as their own script. If it's just one, or a few commands, then inline should work. If it's lots of commands then it's probably best to move them to their own file.
su -c "cd /home/$USERNAME/$PROJECT ; svn update" -m "$USERNAME"
Here is yet another approach, which was more convenient in my case (I just wanted to drop root privileges and do the rest of my script as a restricted user): make the script restart itself as the correct user. This approach is more readable than using sudo or su -c with a "nested script". Let's suppose the script is started as root initially. Then the code will look like this:
#!/bin/bash
if [ $UID -eq 0 ]; then
    user=$1
    dir=$2
    shift 2    # drop the user and dir arguments, keep any remaining parameters
    cd "$dir"
    exec su "$user" "$0" -- "$@"
    # nothing will be executed beyond this line,
    # because exec replaces the running process with the new one
fi
echo "This will be run from user $UID"
...
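Hypothetical usage (script path, user, and directory are placeholders); note the absolute path, since the script cd's into $dir before re-executing "$0":

sudo /usr/local/bin/update.sh builder /home/builder/project --some-extra-flag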
Use a script like the following to execute the rest or part of the script under another user:
#!/bin/sh
id
exec sudo -u transmission /bin/sh - << eof
id
eof
Use sudo instead
EDIT: As Douglas pointed out, you cannot use cd with sudo, since it is a shell builtin and not an external command. You have to run the commands in a subshell to make the cd work.
sudo -u $USERNAME -H sh -c "cd ~/$PROJECT; svn update"
instead of the original, non-working version:
sudo -u $USERNAME -H cd ~/$PROJECT
sudo -u $USERNAME svn update
You may be asked to input that user's password, but only once.
It's not possible to change user within a shell script. Workarounds using sudo described in other answers are probably your best bet.
If you're mad enough to run Perl scripts as root, you can do this with the $<, $(, $> and $) variables, which hold the real/effective uid and gid, e.g.:
#!/usr/bin/perl -w
$user = shift;
if (!$<) {                     # real uid is 0, i.e. we are root
    $> = getpwnam $user;       # set effective uid to the target user's uid
    $) = getgrnam $user;       # set effective gid to the group of the same name
} else {
    die 'must be root to change uid';
}
system('whoami');              # now reports the target user's effective uid
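Hypothetical usage, assuming the script is saved as dropuid.pl, made executable, and started as root:

sudo ./dropuid.pl www-data    # should print "www-data" if the switch worked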
This worked for me
I split out my "provisioning" from my "startup".
# Configure everything else ready to run
config.vm.provision :shell, path: "provision.sh"
config.vm.provision :shell, path: "start_env.sh", run: "always"
then in my start_env.sh
#!/usr/bin/env bash
echo "Starting Server Env"
#java -jar /usr/lib/node_modules/selenium-server-standalone-jar/jar/selenium-server-standalone-2.40.0.jar &
#(cd /vagrant_projects/myproj && sudo -u vagrant -H sh -c "nohup npm install 0<&- &>/dev/null &;bower install 0<&- &>/dev/null &")
cd /vagrant_projects/myproj
nohup grunt connect:server:keepalive 0<&- &>/dev/null &
nohup apimocker -c /vagrant_projects/myproj/mock_api_data/config.json 0<&- &>/dev/null &
Inspired by the idea from @MarSoft, but I changed the lines to the following:
USERNAME='desireduser'
COMMAND=$0
COMMANDARGS="$(printf " %q" "${@}")"
if [ $(whoami) != "$USERNAME" ]; then
    exec sudo -E su $USERNAME -c "/usr/bin/bash -l $COMMAND $COMMANDARGS"
    exit
fi
I have used sudo to allow passwordless execution of the script. If you want to be prompted for the user's password, remove the sudo. If you do not need the environment variables, remove -E from sudo.
The /usr/bin/bash -l ensures that the profile.d scripts are executed, so you get an initialized environment.
I have a bash script that makes a backup of my data files (~50GB). The script is basically something like this:
sudo tar -cf old-backup-1.tar /backup/mydata1
sudo tar -cf old-backup-2.tar /backup/mydata2
sudo rsync -a /mydata1/ /backup/mydata1/
sudo rsync -a /mydata2/ /backup/mydata2/
(I use sudo because some of the files are owned by root).
The problem is that after every command (because each one takes a long time) I lose root privileges: if I'm not at the computer, the sudo password prompt times out and the script stops in the middle of the job.
Is there a way to retain sudo privileges for the entire script? What is the best way to approach this situation? I would prefer to run the script as my own user.
With a second shell:
sudo bash -c "command1; command2; command3; command4"
Perhaps like this:
#!/bin/bash -eu
exec sudo /bin/bash <<'EOF'
echo I am $UID
whoami
#^the script
EOF
Alternatively, you could put something like:
if ! [ $(id -u) -eq 0 ]; then
    exec sudo "$0" "$@"
fi
at the top.
How can I run nested shell scripts with the same option? For example,
parent.sh
#!/bin/sh
./child.sh
child.sh
#!/bin/sh
ls
How can I modify parent.sh so that when I run it with sh -x parent.sh, the -x option is effective in child.sh as well and the execution of ls is displayed on my console?
I'm looking for a portable solution which is effective for rare situations such as system users with /bin/false as their registered shell. Will the $SHELL environment variable be of any help?
Clarification: I sometimes want to call parent.sh with -x, sometimes with -e, depending on the situation. So the solution must not involve hard-coding the flags.
If you use bash, I can recommend the following:
#!/bin/bash
export SHELLOPTS
./child.sh
You can propagate this through as many levels as you need, and you can add echo $SHELLOPTS in every script down the line to see what is happening and how the options are propagated, if you need to understand it better.
But for /bin/sh it will fail with /bin/sh: SHELLOPTS: readonly variable because of how POSIX mode is enforced on /bin/sh on various systems; more info here: https://lists.gnu.org/archive/html/bug-bash/2011-10/msg00052.html
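For plain /bin/sh there is a portable workaround (a sketch, not part of the SHELLOPTS trick): forward the relevant option letters yourself by inspecting the special parameter $-, which lists the options active in the current shell:

#!/bin/sh
# parent.sh: forward -x and -e to the child based on $-.
opts=
case $- in *x*) opts="$opts -x" ;; esac
case $- in *e*) opts="$opts -e" ;; esac
sh $opts ./child.sh    # word splitting of $opts is intentional here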
This looks like a hack and is probably not the best way, but it will do exactly what you want. One way to do it is to create aliases that act as wrappers for /bin/sh:
alias saveShell='cp /bin/sh $some_safe_place'
alias shx='cp $some_safe_place /bin/x_sh; rm /bin/sh; echo "/bin/x_sh -x $#" > /bin/sh; chmod 755 /bin/sh '
alias she='cp $some_safe_place /bin/e_sh; rm /bin/sh; echo "/bin/e_sh -e $#" > /bin/sh; chmod 755 /bin/sh '
alias restoreShell='cp $some_safe_place /bin/sh'
How to use it:
Run saveShell first, then use shx or she. If you want to switch from -x to -e (or back), run restoreShell and then shx or she again.
Then run the script as usual:
sh ./parent.sh
BE VERY CAREFUL WITH MOVING /bin/sh.
Another solution: replace #!/bin/sh with #!/bin/sh -x or #!/bin/sh -e using sed in all the .sh files before running the script.
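A hypothetical one-liner for that (GNU sed's -i is assumed, as is a plain #!/bin/sh first line in both files):

sed -i '1s|^#!/bin/sh$|#!/bin/sh -x|' parent.sh child.sh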
I'm working on a script that will shred a USB drive and install Kali Linux with encrypted persistent storage.
#! /bin/bash
cd ~/Documents/Other/ISOs/Kali
echo "/dev/sdx x=?"
read x
echo "how many passes to wipe? 1 will be sufficient."
read n
echo "sd$x will be wiped $n times."
read -p "do you want to continue? [y/N] " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]
then
exit 1
fi
echo "Your role in the installation process is not over. You will be prompted to type YES and a passphrase."
sudo shred -vz --iterations=$n /dev/sd$x
echo "Wiped. Installing Kali"
sudo dd if=kali-linux-2.0-amd64.iso of=/dev/sd$x bs=512k
echo "Installed. Making persistence."
y=3
sudo parted /dev/sd$x mkpart primary 3.5GiB 100%
x=$x$y
sudo cryptsetup --verbose --verify-passphrase luksFormat /dev/sd$x
sudo cryptsetup luksOpen /dev/sd$x my_usb
sudo mkfs.ext3 -L persistence /dev/mapper/my_usb
sudo e2label /dev/mapper/my_usb persistence
sudo mkdir -p /mnt/my_usb
sudo mount /dev/mapper/my_usb /mnt/my_usb
sudo -i
echo "/ union" > /mnt/my_usb/persistence.conf
umount /dev/mapper/my_usb
cryptsetup luksClose /dev/mapper/my_usb
echo "Persistence complete. Installation complete."
It works nearly perfectly. Entered individually into the terminal, these commands produce the desired effect, but the problem comes in at line 37:
sudo echo "/ union" > /mnt/my_usb/persistence.conf
That command won't work unless I'm logged in as the root user. To solve this I tried adding the sudo -i command before it, but once I do that all of the following commands are skipped.
It's okay if the suggested solution requires me to type in the password. I don't want the password stored in the script; that's just reckless.
Side note: I didn't make a generic form of this question because I want other people to be able to use this if they like it.
The problem is that the echo runs with root privilege but the redirection happens in the original shell as the non-root user. Instead, try running an explicit sh under sudo and doing the redirection in there:
sudo /bin/sh -c 'echo "/ union" > /mnt/my_usb/persistence.conf'
The problem is that when you type in the following command:
sudo echo "/ union" > /mnt/my_usb/persistence.conf
Only the "echo" will be run as root through sudo, but the redirection to the file using > will still be executed as the "normal" user, because it is not a command but something performed directly by the shell.
My usual solution is to use sudo tee, so that the write is performed by a command run as root rather than by a shell redirection, like this:
echo "/ union" | sudo tee /mnt/my_usb/persistence.conf >/dev/null
Now the tee command will be run as root through sudo and will be allowed to write to the file. The >/dev/null is just added to keep the script's output clean. If you ever want to append instead of overwrite (i.e. where you would normally use >>), use tee -a.
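For example, the appending variant of the same command is:

echo "/ union" | sudo tee -a /mnt/my_usb/persistence.conf >/dev/null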
I'm new to Ubuntu and bash scripts, but I just made runUpdates.sh and added this to my .profile to run it:
if [ -f "$HOME/bin/runUpdates.sh" ]; then
. "$HOME/bin/runUpdates.sh"
fi
The problem I'm having is that I want the script to run as if root were running it (because I don't want to type my sudo password).
I found a few places saying that I should be able to do sudo chown root:root <my script> and sudo chmod 4755 <my script>, and that when I run it, it should then run as root. But it doesn't...
The script looks good to me. What am I missing?
-rwxr-xr-x 1 root root 851 Mar 23 21:14 runUpdates.sh*
Can you please help me run the commands in this script as root? I don't really want to change the sudoers file; I really just want to run the commands in this script as root (if possible).
#!/bin/sh
echo "user is ${USER}"
#check for updates
update=`cat /var/lib/update-notifier/updates-available | head -c 2 | tail -c 1`;
if [ "$update" = "0" ]; then
echo -e "No updates found.\n";
else
read -p "Do you wish to install updates? [yN] " yn
if [ "$yn" != "y" ] && [ "$yn" != "Y" ]; then
echo -e 'No\n';
else
echo "Please wait...";
echo `sudo apt-get update`;
echo `sudo apt-get upgrade`;
echo `sudo apt-get dist-upgrade`;
echo -e "Done!\n";
fi
fi
#check for restart
restartFile=`/usr/lib/update-notifier/update-motd-reboot-required`;
if [ ! -z "$restartFile" ]; then
echo "$restartFile";
read -p "Do you wish to REBOOT? [yN] " yn
if [ "$yn" != "y" ] && [ "$yn" != "Y" ]; then
echo -e 'No\n';
else
echo `sudo shutdown -r now`;
fi
fi
I added the "user is" echo to debug; it always outputs my user, not root, and it either prompts for the sudo password (since I'm calling the commands with sudo) or tells me "are you root?" (if I remove sudo).
Also, is there a way to output the update commands' stdout in real time, not just as one block when they finish?
(I also tried with the shebang as #!/bin/bash)
setuid does not work on shell scripts for security reasons. If you want to run a script as root without a password, you can edit /etc/sudoers to allow it to be run with sudo without a password.
To "update in real time", you would run the command directly instead of using echo.
It's not safe to do; you should probably use sudoers, but if you really need/want to, you can do it with something like this:
echo <root password> | sudo -S echo -n 2>/dev/random 1>/dev/random
sudo <command>
This works because sudo doesn't require a password for a brief window after successfully being used.
SUID root scripts were phased out many years ago. If you really want to run scripts as root, you need to wrap them in an executable; you can see an example of how to do this on my blog:
http://scriptsandoneliners.blogspot.com/2015/01/sanitizing-dangerous-yet-useful-commands.html
The example there shows how to change executable permissions and place a filter around other executables using a shell script, but the concept of wrapping a shell script in an executable works for SUID as well: the resulting executable that wraps the shell script can be made SUID.
https://help.ubuntu.com/community/Sudoers