ulimit -s on Netbeans 8.0.2 - linux

I want to run a C program in NetBeans 8.0.2 (on Xubuntu 14.04) with ulimit -s set. I've already tried writing ulimit -s 2048; "${OUTPUT_PATH}" in "Re-run with arguments", but it shows me this error:
/bin/sh: 1: exec: ulimit: not found
I don't want to compile the program myself just to set ulimit in a terminal.

This doesn't look like a C question.
Anyway, on Linux, ulimit is not a standalone program; it's a bash builtin. Unless /bin/sh is linked to bash (which it usually is not), the command won't be known to the shell.
Try /bin/bash -c 'ulimit -s 2048; "${OUTPUT_PATH}"' instead. The quotes matter: everything after -c must be passed as a single argument, otherwise bash runs plain ulimit and treats -s 2048 as positional parameters.
Note that the new limit is only active in that particular shell; once you return from it, you'll see whatever limit you had before.
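A quick demonstration of that scoping in a terminal (the numbers are illustrative):
ulimit -s                              # e.g. 8192, the current soft stack limit
bash -c 'ulimit -s 2048; ulimit -s'    # prints 2048: the limit inside the child shell
ulimit -s                              # still 8192 in the outer shell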

Related

Get the core file in a Perl script with backticks

On my Ubuntu 14.04 (Linux 3.19.0, 64-bit) PC, I ran a Perl program that has the following in a loop:
$params = setupParams();
$ret = `SOME_CMD $params`;
...
But for some reason, SOME_CMD occasionally gives Segmentation fault (core dumped). In order to figure out the cause of the core dump, I need to get the core file.
Unfortunately, I tried ulimit -S -c 0 in the terminal where I ran the Perl script, but it didn't produce a core file.
Any ideas would be appreciated.
ulimit -c 0 prevents core files from being written. You need to use
ulimit -c unlimited
Btw: you should upgrade to a maintained OS.
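A minimal sketch of the whole sequence, run in the same shell that launches the Perl script (the script name is hypothetical; note that on Ubuntu, /proc/sys/kernel/core_pattern may pipe cores to apport rather than writing a core file in the working directory):
ulimit -S -c unlimited               # raise the soft core-size limit
cat /proc/sys/kernel/core_pattern    # check where the kernel sends core dumps
perl my_script.pl                    # hypothetical script; by default, cores appear in SOME_CMD's working directory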

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines. The cluster size is on the order of 100, so I cannot start the processes manually; I have to start them by script (which uses ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it with ulimit -n 10000. This changes the limit for that shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work. The commands after sudo su didn't execute at all; they execute only when I log out of the su session.
This article shows a way to make the change permanent, but when I open limits.conf I don't find anything there; it only contains some commented-out notes.
Please suggest a way to increase the limit permanently, or to change it by script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second: this is a special case, because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program, because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session of the root user.
Note that you can replace && with ; in this case: since the line is being executed as root, ulimit -n 10000 will always complete successfully.
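For the cluster case in the question, the same pattern can be driven over ssh; a hedged sketch (the host name and placeholder command are illustrative, and -t gives sudo a tty in case it needs to prompt for a password):
ssh -t user@node1 "sudo bash -c 'ulimit -n 10000 && <commandToExecuteMyProcess>'"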

Why are my ulimit settings ignored in the shell?

I have to execute a .jar, and I need to set a ulimit before the execution, so I wrote a shell script:
#!/bin/sh
ulimit -S -c unlimited
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar
But the ulimit seems to be ignored, because I get this error:
java.lang.InternalError: java.io.FileNotFoundException: /usr/java/jre1.8.0_91/lib/ext/localedata.jar (Too many open files)
If you want to change the maximum number of open files, you need to use ulimit -n.
Example:
ulimit -n 8192
The -c option changes the core file size limit (core dumps), not the maximum number of open files.
You need to apply the ulimit to the shell that will call the java application.
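Putting both points together, a minimal corrected version of the script (8192 is an illustrative value; it cannot exceed the hard limit reported by ulimit -Hn):
#!/bin/sh
# raise the open-files limit in the shell that launches the JVM
ulimit -n 8192
/usr/java/jre1.8.0_91/bin/java -jar /home/update.jar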

Suppress 'Warning: no access to tty' in ssh

I have a short, simple script that compiles a .c file, runs it on a remote server running tcsh, and then gives control back to my machine (this is for school: I need my programs to work properly on the lab computers, but I want to edit them etc. on my machine). It runs commands this way:
ssh -T user@server << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF
So far it works fine, but it gives this warning every time I do this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
I know this technically isn't a problem, but it's SUPER annoying. I'm trying to do school work, checking the output of my program etc., and this clutters everything, and I HATE it.
I'm running this version of ssh on my machine:
OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
This version of tcsh on the server:
tcsh 6.17.00 (Astron) 2009-07-10 (x86_64-unknown-linux)
And this version of ssh on the server:
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
The message is actually printed by the shell, in this case tcsh.
You can use
strings /usr/bin/tcsh | grep 'no access to tty'
to ensure that it belongs to tcsh itself.
It is related to ssh only very loosely; ssh in this case is just the trigger, not the cause.
You can either change your approach and avoid the here-document: place an executable custom_script at /path/custom_script and run it via ssh.
# this will work
ssh user@dest '/path/custom_script'
Or just run the complex command as a one-liner.
# this will work as well
ssh user@dest "cd cs4400/$dest;gcc -o $efile $file;./$efile"
On OS X, I solved a similar problem (for script provisioning on Vagrant) with ssh -t -t (note that -t comes twice).
Advice based on the ssh BSD man page:
-T      Disable pseudo-terminal allocation.
-t      Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
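An illustrative invocation of that trick (host, directory, and program are placeholders):
# force tty allocation even though there is no local tty
ssh -t -t user@server 'cd cs4400/dir && ./prog'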
If running tcsh is not important for you, specify a different shell and it will work:
ssh -T user@server bash << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF

ssh remote command execution and ulimit

I have the following script:
cat > /tmp/script.sh <<EndOfScript
#!/bin/sh
ulimit -n 8192
run_app
EndOfScript
which runs smoothly locally; it is always OK. But if I try to run it remotely through ssh:
scp /tmp/script.sh user#host:/tmp/script.sh
ssh user#host "chmod 755 /tmp/script.sh; /tmp/script.sh"
I got the error:
ulimit: open files: cannot modify limit: Operation not permitted
I also tried the following command:
ssh user#host "ulimit -n 8192"
same error.
It looks like ssh remote command execution enforces a 1024 hard limit on nofile, but I cannot find out how to change this default value. I tried modifying /etc/security/limits.conf and restarting sshd; still the same error.
Instead of using the /etc/initscript workaround (and do not make a typo in that file.. :), if you just want sshd to honor the settings you made in /etc/security/limits.conf, make sure you have UsePAM yes in /etc/ssh/sshd_config, and that /etc/pam.d/sshd lists session required pam_limits.so (or includes another file that does so).
That should be all there is to it.
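Two quick checks on the server, assuming the usual Linux paths:
grep -i '^UsePAM' /etc/ssh/sshd_config   # should print: UsePAM yes
grep pam_limits /etc/pam.d/sshd          # should show: session required pam_limits.so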
In older versions of OpenSSH (before roughly 3.6) there was also a problem with UsePrivilegeSeparation that prevented limits from being honored, but it was fixed in newer versions.
Finally figured out the answer: add the following to /etc/initscript
ulimit -c unlimited
ulimit -HSn 65535
# Execute the program.
eval exec "$4"
Raising a limit above the current hard limit requires superuser privileges.
I would suggest asking the server administrator to modify that value for you on the server where you are trying to run the script.
He/she can do that by modifying /etc/security/limits.conf on Linux. Here is an example that might help:
* soft nofile 8192
* hard nofile 8192
After that, you don't need to restart sshd. Just log out and log in again.
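A quick way to verify from a fresh session (user and host are placeholders):
ssh user@host 'ulimit -n'   # should now print 8192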
I would suggest asking the same question on Server Fault, though. You'll get better server-side answers there.
Check the startup scripts (/etc/profile, ~/.??*) for a call to ulimit. IIRC, once a hard limit has been lowered, an unprivileged process can't raise it again.
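A quick way to hunt for such a call (the file list is illustrative; adjust for the shell in use):
grep -n ulimit /etc/profile ~/.profile ~/.bashrc 2>/dev/null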
