I have the following script:
cat > /tmp/script.sh <<EndOfScript
#!/bin/sh
ulimit -n 8192
run_app
EndOfScript
which runs smoothly locally and is always fine. But if I try to run it remotely through ssh:
scp /tmp/script.sh user@host:/tmp/script.sh
ssh user@host "chmod 755 /tmp/script.sh; /tmp/script.sh"
I got the error:
ulimit: open files: cannot modify limit: Operation not permitted
I also tried the following command:
ssh user@host "ulimit -n 8192"
same error.
It looks like ssh remote command execution is enforcing a 1024 hard limit on the nofile limit, but I cannot find out how to modify this default value. I tried modifying /etc/security/limits.conf and restarting sshd, but I still get the same error.
Instead of using the /etc/initscript workaround (and don't make a typo in that file.. :), if you just want sshd to honor the settings you made in /etc/security/limits.conf, you should make sure you have UsePAM yes in /etc/ssh/sshd_config, and that /etc/pam.d/sshd lists session required pam_limits.so (or otherwise includes another file that does).
That should be all there is to it.
In older versions of OpenSSH (before roughly 3.6) there was also a problem with UsePrivilegeSeparation that prevented limits from being honored, but it was fixed in newer versions.
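For reference, here is a minimal sketch of the three pieces together; the file locations assume a typical Linux layout, and the 8192 value is just the one from the question:
```
# /etc/ssh/sshd_config -- let sshd run the PAM session stack
UsePAM yes

# /etc/pam.d/sshd -- apply /etc/security/limits.conf to every ssh session
session    required     pam_limits.so

# /etc/security/limits.conf -- the limits you actually want
*    soft    nofile    8192
*    hard    nofile    8192
```
After restarting sshd, open a new ssh session and check with ulimit -n; the new limits only apply to sessions started after the change.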
Finally figured out the answer: add the following to /etc/initscript:
ulimit -c unlimited
ulimit -HSn 65535
# Execute the program.
eval exec "$4"
Raising the limit above the current hard limit requires superuser privileges.
I would suggest asking the server administrator to modify that value for you on the server you are trying to run the script on.
They can do that by modifying /etc/security/limits.conf on Linux. Here is an example that might help:
* soft nofile 8192
* hard nofile 8192
After that, you don't need to restart sshd. Just log out and log in again.
I would suggest asking the same question on Server Fault, though. You'll get better answers about server-side issues there.
Check the startup scripts (/etc/profile, ~/.??*) for a call to ulimit. IIRC, once a hard limit has been lowered, it can't be raised again by an unprivileged process.
In Ubuntu Mate 16.04.4 LTS, every time I run the command:
$ ulimit -a
I get:
open files (-n) 1024
I tried to increase this limit by adding the following line to /etc/security/limits.conf:
myusername hard nofile 100000
but it doesn't matter: the value 1024 persists when I run ulimit -a. I rebooted the system after the modification, yet the problem persists.
Also, if I run
ulimit -n 100000
I get the response:
ulimit: open files: cannot modify limit: Operation not permitted
and if I run
sudo ulimit -n 100000
I get:
sudo: ulimit: command not found
Any ideas on how to increase that limit?
thx
From man bash under ulimit:
-n The maximum number of open file descriptors (most systems do not allow this value to be set)
Maybe your problem is simply that your system does not support modifying this limit?
I found the solution just after I posted this question, based on:
https://askubuntu.com/questions/162229/how-do-i-increase-the-open-files-limit-for-a-non-root-user
I also edited:
/etc/pam.d/common-session
and added the following line to the end:
session required pam_limits.so
All works now.
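If anyone wants to verify the result, a quick sketch (100000 is the value from the limits.conf line above):
```
# after logging out and back in
ulimit -Hn          # hard limit for open files, should now report 100000
ulimit -n 100000    # raising the soft limit should succeed without
                    # "Operation not permitted"
```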
I have to execute a process on a cluster of machines. The cluster is on the order of 100 machines, so I cannot start the processes manually; I have to start them from a script (which uses ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it using ulimit -n 10000. This changes the limit for that shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work. The commands after "sudo su" didn't execute at all; they execute only after I log out of the su session.
This article shows a way to make the change permanent, but when I open limits.conf I don't find anything there; it only has some commented-out notes.
Please suggest some way to increase the limit permanently, or to change it from a script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and only after you exit that shell does it execute the rest of the line as the normal user.
Second: this is a special case because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session of the root user.
Note that you can replace && with ; in this case: that's because it is being executed as root and ulimit -n 10000 will always complete successfully.
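Applied to the cluster case from the question, a hedged sketch of running that over ssh (the host admin@node01 and the command path are placeholders, and it assumes passwordless sudo on the node):
```
# the single-quoted part runs inside one root bash, so the raised limit
# is inherited by the process started there
ssh admin@node01 "sudo bash -c 'ulimit -n 10000 && exec /path/to/commandToExecuteMyProcess'"
```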
I have a Linux server (CentOS release 6.4) which processes source code sent by users. On the server is a Java application that starts a bash script which runs the compilation and execution commands for this source code in a restricted way (time and memory are limited, there is no Internet access, and everything is executed by a limited user).
The Java program must always be running so it can register new job requests.
When started, the Java program works fine, but after some time (we're talking days) commands are no longer executed properly. I get the following error message:
sudo: sorry, you must have a tty to run sudo
the line which is causing that is:
sudo -u codiana $COMMAND &
where $COMMAND is command to execute along with its arguments
After restarting the application (kill and start again), everything works.
Is there some time limit on Linux which can cause that?
You can comment out the following line in /etc/sudoers:
#Defaults requiretty
Edit:
man sudoers | grep requiretty -A 5
requiretty If set, sudo will only run when the user is logged in
to a real tty. When this flag is set, sudo can only be
run from a login session and not via other means such
as cron(8) or cgi-bin scripts. This flag is off by
default.
So if this is not desired, open /etc/sudoers with your text editor of choice and comment out this line.
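As a minimal sketch of that edit (always via visudo; the per-user variant is optional, and javauser is a hypothetical name for the account that actually invokes sudo):
```
# run visudo so a syntax error cannot lock you out of sudo
visudo

# inside the editor, either comment out the global default ...
#Defaults    requiretty

# ... or disable it only for the invoking account, if you prefer
Defaults:javauser    !requiretty
```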
I have a web server (odin) and a backup server (jofur). On jofur, I can run the following code to rsync my web directories (via key authentication) from odin to jofur:
rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
If I enter this into the command line, everything rsyncs perfectly:
myuser@jofur:~$ rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
receiving incremental file list
sent 23 bytes received 1921 bytes 1296.00 bytes/sec
total size is 349557271 speedup is 179813.41
I want this to run every morning, so I edited my crontab to read this:
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
This doesn't work. The following message is deposited in /var/mail/myuser:
Could not create directory '/home/myuser/.ssh'.
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
I'm not sure what this error means. I'm wary of futzing blindly with permissions because I don't want to leave any backdoors open. Any suggestions?
It's hard to tell whether cron is using the wrong rsync binary or whether rsync requires some environment variable that is not being set under cron. Redirect stdout/stderr as shown below and pass along the output of the log file.
Also, try running "which rsync" from the command line; this will tell you which rsync binary you are using interactively.
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin > /tmp/cron_output.log 2>&1
EDIT :
Can you create a shell script called SOME_DIR/cron_job_rsync.sh that contains the following? Make sure you set the execute bit.
#!/bin/sh
/usr/sbin/rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
And modify the cronjob as shown below
0 4 * * * SOME_DIR/cron_job_rsync.sh >/tmp/cron_output.log 2>&1
I had a similar issue. In my case, the HOME directory was encrypted.
If your user is logged in, known_hosts and the keys in $HOME/.ssh work fine.
But when the job runs from cron, it runs as the right user BUT it does not have access to your $HOME/.ssh directory, because the home directory is still encrypted :-(
I got the same error as you.
I finally found that the user's home directory is a mount point, which changes when you log in.
You can use the mount command to check whether your home directory is set up the same way.
So I logged in, ran cd /, then did:
```
cp -ar ${HOME}/.ssh /tmp/
sudo umount ${HOME}
mv /tmp/.ssh ${HOME}
```
This may fail because you need write permission on ${HOME}; if you don't have it, try sudo or make ${HOME} writable.
After that, everything was fine.
Please follow the steps at the link below to avoid the error:
http://umasarath52.blogspot.in/2013/09/solved-rsync-not-executing-via-cron.html
I resolved this issue by communicating with the administrators for my server. Here is what they told me:
For advanced security and performance, we use 1H (Hive) which
utilizes a chrooted environment for users. Libraries and binaries
should be copied to the chrooted environment to make them accessible.
They sent me a follow-up email telling me that the "Relevent" packages had been installed. At that point, the problem was resolved. Unfortunately, I didn't get any additional information from them. The host was Arvixe, but I'm guessing that anyone using 1H (Hive) will encounter a similar problem. Hopefully this answer will be helpful.
Use the rrsync script together with a dedicated ssh key as follows:
REMOTE server
mkdir ~/bin
gunzip /usr/share/doc/rsync/scripts/rrsync.gz -c > ~/bin/rrsync
chmod +x ~/bin/rrsync
LOCAL computer
ssh-keygen -f ~/.ssh/id_remote_backup -C "Automated remote backup" #NO passphrase
scp ~/.ssh/id_remote_backup.pub devel@10.10.10.83:/home/devel/.ssh
REMOTE computer
cat id_remote_backup.pub >> authorized_keys
Prepend to the newly added line the following
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding
So that the result looks like
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAA...vp Automated remote backup
LOCAL
Put the following script (with execute permission) in your crontab:
#!/bin/sh
echo ""
echo ""
echo "CRON:" `date`
set -xv
rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -avzP devel#10.10.10.83:/ /home/user/servidor
Source: http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/
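For completeness, a hedged example of the crontab line that runs the script above (the script path and log path are assumptions; adjust them to wherever you saved it):
```
# run the backup script every morning at 04:00 and keep its output in a log
0 4 * * * /home/user/bin/backup_rsync.sh >> /home/user/backup_rsync.log 2>&1
```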
I did several steps to make it work.
Check your paths. For every command you'll use, run which [command] and use that full path in the crontab.
Open the crontab as the user you want to run it as, so it has access to that user's ssh key.
Add (remember to use which) ssh-agent && [your ssh command] so it can connect over ssh.
If authentication still fails at this point, try generating a passwordless ssh key so you can skip the password prompt.
For debugging, it is useful to add -vvv to the ssh command used by rsync; it makes it clear what goes wrong, as shown in the sketch below.
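As an illustration of that last point, a debugging sketch (host and destination are copied from the question; the key path is an assumption):
```
# verbose ssh output makes authentication failures under cron visible
rsync -avz -e "ssh -vvv -i /home/myuser/.ssh/id_rsa" \
    backups@odin.mydomain.net:/home/backups /home/myuser/odin
```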
Using the correct keyring solved the issue for me. Add the following line to your crontab:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
In total, your crontab (edited by calling crontab -e from your terminal) should now look as follows:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
0 4 * * * rsync -avz backups@odin.mydomain.net:/home/backups /home/myuser/odin
Background: it turns out that some Linux distributions use a keyring to protect your public-private key pair - so the key pair is password-protected without you ever noticing. Consequently, rsync is not able to open your ssh key for authentication.
Note that I also omitted the -e ssh; I think it is not necessary here.
Further Troubleshooting: rsync does not provide a lot of debugging output. What helped me identify the problem was to put a dummy scp command, which is much more verbose, in my crontab. A crontab entry for troubleshooting may look something like:
* * * * * scp -v backups@odin.mydomain.net:/home/backups/dummy.txt /home/myuser/odin/dummy.txt >> /home/myuser/odin/dummy.txt.log 2>&1
The above command will run every minute (great while developing) and it will copy the file /home/backups/dummy.txt to your local machine. All logs (stdout and stderr) are written to /home/myuser/odin/dummy.txt.log. Inspect these logs to see where exactly the error comes from.
Reference: The troubleshooting explained above led me to the solution: https://unix.stackexchange.com/a/332353/395749
When running my application I sometimes get an error about too many files open.
Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?
Edit
ulimit -n 2048 results in a permission error.
You could always try doing a ulimit -n 2048. This will only reset the limit for your current shell, and the number you specify must not exceed the hard limit.
Each operating system has a different hard limit setup in a configuration file. For instance, the hard open file limit on Solaris can be set on boot from /etc/system.
set rlim_fd_max = 166384
set rlim_fd_cur = 8192
On OS X, this same data must be set in /etc/sysctl.conf.
kern.maxfilesperproc=166384
kern.maxfiles=8192
Under Linux, these settings are often in /etc/security/limits.conf.
There are two kinds of limits:
soft limits are simply the currently enforced limits
hard limits mark the maximum value which cannot be exceeded by setting a soft limit
Soft limits can be changed by any user (up to the hard limit), while hard limits can only be raised by root.
Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in the init scripts, and user limits should be set at login, for example by using pam_limits.
There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.
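A quick way to check for that, as a hedged sketch (the exact set of startup files varies by distribution and shell):
```
# look for ulimit calls in the usual startup locations
grep -R "ulimit" /etc/profile /etc/profile.d /etc/init.d ~/.bashrc ~/.profile 2>/dev/null
```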
If you are using Linux and you got the permission error, you will need to raise the allowed limit in the /etc/limits.conf or /etc/security/limits.conf file (where the file is located depends on your specific Linux distribution).
For example, to allow anyone on the machine to raise their number of open files up to 10000, add the following line to the limits.conf file:
* hard nofile 10000
Then log out and log back in to your system, and you should be able to do:
ulimit -n 10000
without a permission error.
1) Add the following line to /etc/security/limits.conf
webuser hard nofile 64000
then login as webuser
su - webuser
2) Edit the following two files for webuser:
Append to .bashrc and .bash_profile by running
echo "ulimit -n 64000" >> .bashrc ; echo "ulimit -n 64000" >> .bash_profile
3) Log out, then log back in and verify that the changes have been made correctly:
$ ulimit -a | grep open
open files (-n) 64000
That's it, and then boom, boom, boom.
If some of your services are running into ulimits, it's sometimes easier to put the appropriate commands into the service's init script. For example, when Apache is reporting
[alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread
Try to put ulimit -s unlimited into /etc/init.d/httpd. This does not require a server reboot.
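A minimal sketch of what that could look like near the top of /etc/init.d/httpd, before the daemon is started (the extra nofile line is an optional assumption, not part of the original fix):
```
# raise limits for the service before it starts
ulimit -s unlimited    # stack size, for the apr_thread_create error above
ulimit -n 8192         # open files, only if your workload also needs more descriptors
```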