sudo not working correctly after some time - linux

I have a Linux server (CentOS release 6.4) that processes source code sent by users. A Java application on the server starts a bash script that runs the compile and execute commands for this code in a restricted way (time and memory are limited, no Internet access, executed by a limited user).
The Java program must always be running so it can register new job requests.
When started, the Java program works fine, but after some time (we are talking days), commands stop being executed properly and I get the following error message:
sudo: sorry, you must have a tty to run sudo
The line causing this is:
sudo -u codiana $COMMAND &
where $COMMAND is the command to execute, along with its arguments.
After restarting the application (kill and start again), everything works.
Is there some time limit on Linux that could cause this?

You can comment out the requiretty line in /etc/sudoers:
#Defaults requiretty
Edit:
man sudoers | grep requiretty -A 5
requiretty        If set, sudo will only run when the user is logged
                  in to a real tty.  When this flag is set, sudo can
                  only be run from a login session and not via other
                  means such as cron(8) or cgi-bin scripts.  This
                  flag is off by default.
So if this is not desired, open /etc/sudoers with your text editor of choice and comment out this line.
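If you would rather not relax this globally, sudoers also lets you scope the change to one account. A minimal sketch, using the codiana user from the question (edit with visudo so a syntax error cannot lock you out):
sudo visudo
# add a per-user override instead of commenting out the global default:
Defaults:codiana !requiretty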

Related

Running sudo scripts/bash commands on a remote machine

I need to remotely start bash scripts that perform sudo tasks, such as chmod and ntpdate and echoing to gpio.
A cron job might be the best solution for some of this, but cron is giving me headaches. I'd like to pass on this venue if I can...
I've confirmed that my scripts work locally (I can ssh into the machine and run them without a hiccup.)
However, if I try to run them remotely like so (this is within a C++ system call):
ssh user@pc 'bash -s' < /home/user/myScript.sh
Commands with sudo fail.
sudo chmod fails with: no tty present and no askpass program specified
echo to gpio fails with: write error: Device or resource busy
sudo ntpdate fails with: no tty present and no askpass program specified
Can anyone help explain, or help me determine what's happening here?
I'm open to band-aids and different approaches, thanks!
You already found the problem yourself:
sudo chmod fails with: no tty present and no askpass program specified
If you run your shell script via ssh and the script wants to run sudo, sudo itself will ask for the user's password. But the ssh session is not a tty: how should sudo prompt for a password, and how would it read your input?
You can do it if you provide the password in the script (which makes it very dangerous if someone else can read that script!)
script.sh:
echo "your passwd" | sudo -S
As an alternative solution, you can run the ssh session as a more privileged user:
ssh privileged_user@pc 'bash -s' < /home/user/myScript.sh
All that comes with some danger: running all commands from the script as a more privileged user can also be dangerous!
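One more workaround, if an interactive password prompt is acceptable: force ssh to allocate a pseudo-tty with -t so sudo has a terminal to prompt on. Note this does not help with the 'bash -s' form above, because stdin is redirected from the script file and is not a terminal, but it works when you pass the command directly. A sketch (the file path is a placeholder):
ssh -t user@pc 'sudo chmod 644 /some/file'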

Why might I get this error on a script that has been running fine for a year? - sudo: sorry, you must have a tty to run sudo

I have a script that runs nightly. The userid is set up in sudoers to perform these functions. I do not intend to disable "Defaults requiretty", particularly without knowing why it's suddenly a problem now.
Here's what it does with sudo:
sudo lvcreate --size 19000M --snapshot --name snap_u /dev/mapper/vg_u-lvu
sudo mount /dev/vg_u/snap_u /snapshot
sudo rsync -av --delete --bwlimit=12000 --exclude usr/spoolhold --exclude email --exclude tempfile /snapshot/ /u1/prev/dir
sudo umount /snapshot
sudo lvremove -f /dev/vg_u/snap_u
For the past few weeks it doesn't work most of the time. Sometimes when I run the commands "manually" it works fine. When it fails I see this message filling the log file:
sudo: sorry, you must have a tty to run sudo
The problem began when I switched some other scripts for a remote backup. The only things I changed in this script were comments. This script is invoked by an application program that uses ‘nohup’ to run it in the background.
During my testing I killed the process to stop it from running in the background when I wanted to run it again immediately. Since then I’ve had this problem. So, my questions are these:
Could this error be related to ‘killing’ those processes (Maybe I killed the wrong one)?
Any ideas for a solution?
1) Could this error be related to ‘killing’ those processes (Maybe I killed the wrong one)?
No
2) Any ideas for a solution?
This is related to the requiretty configuration option in /etc/sudoers. It was probably changed there, or in the shipped defaults, during one of the updates. Set it to off and you should be good.
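To verify what sudo actually applies to the account the script runs as, you can list the matching Defaults entries and allowed commands. A quick sketch, with a placeholder username:
sudo -l -U backupuser    # run as root; shows matching Defaults (e.g. requiretty) and allowed commands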

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines. The cluster is on the order of 100 nodes, so I cannot start the processes manually; I start them with a script (which uses ssh; currently I am using python-paramiko for this). These processes open more than 1024 TCP sockets (the default limit on Linux), so I need to raise that limit with ulimit -n 10000. That changes the limit for the current shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work: the commands after sudo su didn't execute at all. They execute only when I log out of the su session.
This article shows a way to make the change permanent, but when I opened limits.conf I didn't find anything there; it only has some commented notes.
Please suggest a way to increase the limit permanently, or to change it by script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second: this is a special case, because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program, because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session of the root user.
Note that you can replace && with ; in this case: that's because it is being executed as root and ulimit -n 10000 will always complete successfully.
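For the permanent route, the commented-out limits.conf is normal: the shipped file is only documentation, and you append your own entries. A minimal sketch for /etc/security/limits.conf, assuming the processes run as a placeholder user clusteruser (the change applies to new login sessions, via pam_limits):
# /etc/security/limits.conf -- raise the open-file limit for one user
clusteruser  soft  nofile  10000
clusteruser  hard  nofile  10000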

How to open an xterm window from a terminal and run a command in the background from xterm?

My application tries to execute the root commands "sudo ifup eth0" and "sudo ifdown eth0", but it returned the error "sudo: sorry, you must have a tty to run sudo".
So it requires a tty to execute the sudo commands, and I tried to execute the commands by opening tty sessions:
gnome-terminal --command="sudo ifdown eth0" &
xterm -e "sudo ifdown eth0" &
then it worked fine. But I am not able to send the command from the newly created gnome-terminal or xterm.
That is, if I close the newly created gnome-terminal or xterm window before it has executed the commands, the commands are terminated immediately.
Can you suggest how to prevent the user from closing the window, or how to make it invisible to the user?
Note: you can test this by using the system-config-network command instead of ifdown and ifup.
I would suggest not using xterm or gnome-terminal to provide a terminal for sudo, but dealing with the "sorry, you must have a tty to run sudo" message directly.
There is a requiretty option in the sudoers file that makes sudo demand a terminal. If this option is unset with !requiretty and the command is executed with the NOPASSWD option, sudo should run without the need to open a new terminal window. There are more details in this serverfault post.
That is how sudo is used for instance in cron scripts.
Since the requiretty option provides additional security in an environment where sudo is used not only in cron scripts but also to let remote users issue commands with elevated privileges, the effect of !requiretty can be restricted:
User_Alias LOCAL_USERS = john, mary
Cmnd_Alias NETWORK_SCRIPTS = /sbin/ifup, /sbin/ifdown
Defaults!NETWORK_SCRIPTS !requiretty
LOCAL_USERS ALL = NOPASSWD: NETWORK_SCRIPTS
If you run your code within an X session, then you can use gksudo instead of sudo:
gksudo -m "Your message" /command/to/run
It will prompt the user for a password (if needed) using a nice GUI dialog. There is no need for xterm or gnome-terminal.
The effect is more secure than allowing a particular command to run without any password, and the solution is more consistent with what users are used to.
In general, sudo or su need to prompt for a password, or programs could escalate their privileges without user intervention. If your application needs to elevate for some purpose, you will need to use an xterm or similar. There are difficulties, though, in getting the return code back (konsole might need --nofork and gnome-terminal might need --disable-factory, but the options sadly vary by version), and it's not easy to get it right on every system. Most Unixes and Linux distributions provide xterm, but some old Fedora/RHEL/CentOS provide X without xterm, so it's another dependency to think about.
The command launched by xterm -e sudo -- ... can then do the standard double-fork and setsid. Once the user has entered their password in the xterm, it goes away immediately, but your command keeps running in the background with elevated privileges. It can connect back to the original program using a socket or fifo to act as a root co-process.
The daemon or disown commands or similar might be useful if you want to wrap an existing application in a double-fork & setsid (eg, xterm -e sudo -- daemon system-config-network or perhaps xterm -e sudo -- bash -c "system-config-network & disown -a").
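As a concrete illustration of that pattern, a minimal sketch: system-config-network is from the question, and the log path is a placeholder. The xterm exists only long enough for the sudo password prompt; setsid detaches the real command into its own session so it survives the window closing:
xterm -e sudo -- bash -c 'setsid system-config-network >/var/log/netcfg.log 2>&1 </dev/null &'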

Run script with rc.local: script works, but not at boot

I have a node.js script which needs to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I saw exactly what happened, and manager.js now works great. Searching SO, I found I had to place this in my /etc/rc.local. I also learned to point the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should be a daemon, so the last character is &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, after a reboot no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js has not run. What is happening?
In this example of an rc.local script, I use I/O redirection at the very first line of execution to send output to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
On some Linuxes (CentOS & RH, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken and /etc/rc.local is a separate file, then changes to /etc/rc.local won't be seen at boot: the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at boot.)
It sounds like on dimadima's system they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local.
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
I ended up with upstart, which works fine.
In Ubuntu I noticed there are 2 files. The real one is /etc/init.d/rc.local; it seems the other /etc/rc.local is bogus?
Once I modified the correct one (/etc/init.d/rc.local) it did execute just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon you should close stdin by adding 0<&- before the &.
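Putting those two suggestions together, the rc.local line might look like this (a sketch: the node path is an assumption, check yours with which node):
su www-data -c '/usr/local/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 0<&- &'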
I had the same problem (on CentOS 7) and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, you usually have no way to touch the real hardware, so you never see the configuration interface at first boot and of course cannot complete it. As a result, the firstboot service will always stand in the way of rc.local. The solution is to disable firstboot:
sudo chkconfig firstboot off
If you are not sure why your rc.local does not run, you can always check /etc/rc.d/rc, because that file will always run and call the other subsystems (e.g. rc.local).
I got my script to work by editing /etc/rc.local then issuing the following 3 commands.
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from experience that the most reliable way to run your script at system boot time is to use an @reboot entry in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node), it will work.
It is my understanding that if you want your script to run at a certain runlevel, you should use ln -s to link the script into the directory for the level you want it to run in, as in the sketch below.
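For example, a sketch for SysV-style init, where the S99 prefix makes the script start late in runlevel 3 (the script name is a placeholder):
sudo ln -s /etc/init.d/myscript.sh /etc/rc3.d/S99myscript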
First, make the script executable:
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local, before the exit 0 line:
sh /path/of/the/file.sh
Next, make rc.local itself executable:
sudo chmod 755 /etc/rc.local
Then initialize rc.local with:
sudo /etc/init.d/rc.local start
Now reboot the system. Done.
I found that because I was using a network-oriented command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of my script. I don't know why, but it seems that when the script runs the network interfaces aren't properly configured yet, and the sleep just allows some time for DHCP to finish. I don't fully understand it, but I suppose you could give it a try.
I had exactly the same issue: the script ran fine locally, but after a reboot/power-on it did not.
I resolved the issue by changing the file path: you basically need to give the complete path in the script. When running locally the file can be accessed with a relative path, but when running at boot that path will not be understood.
1. I do not recommend using root to run apps such as a node app. You can do it, but you may run into more problems.
2. rc.local normally runs as the root user, so if your script should run as another user such as www, you should make sure the PATH and other environment variables are set up correctly.
3. I find an easy way to run a service as a given user is:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs at startup. If you want a script to execute when the system goes down for halt or reboot, it needs to go into the rc0.d (halt) or rc6.d (reboot) directory with a K99 prefix.
