StreamSets Data Collector Installation - linux

I am having trouble manually installing the full tarball of StreamSets Data Collector. I am running Ubuntu in a VM with over 30 GB of storage space.
I have read the Manual Page from the StreamSets website, but it's not useful.
Here is what I have done so far:
I have downloaded the full tarball to my Home/Downloads
I have extracted the tarball to my Home/Downloads folder and I have the directory streamsets-datacollector-3.13.0 with all of its subdirectories
Now when I try bin/streamsets dc I get the following errors:
WARN: could not determine Java environment version; expected 1.8, which are the supported versions
Configuration of maximum open file limit is too low: 1024 (expected at least 32768).
I have installed all the Java packages using apt install java*
and I have tried to change the limits in /etc/security/limits.conf,
as shown below:
#* soft core 0
#root hard core 100000
#* hard rss 10000
##student hard nproc 20
##faculty soft nproc 20
##faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
##student - maxlogins 4
# End of file
* soft nproc 33000
* hard nproc 33000
* soft nofile 33000
* hard nofile 33000
I even did a system reboot after. However, when I type ulimit -n it still gives me the default 1024.
How should I fix this error?

You just need to type "ulimit -n 32768" to change it.
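If the limits.conf change still is not picked up, a workaround for a quick test is to raise the soft limit only in the shell you launch Data Collector from (this works only up to the hard limit reported by ulimit -Hn), and the Java warning usually goes away once a Java 8 runtime is installed; a rough sketch, assuming Ubuntu's openjdk-8-jdk package and the extraction path from the question:
sudo apt install openjdk-8-jdk    # provides a Java 1.8 runtime, which SDC reports as the supported version
ulimit -n 32768                   # raises the soft open-file limit for this shell session only
cd ~/Downloads/streamsets-datacollector-3.13.0
bin/streamsets dc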

Related

Too many open files error but 99.5% inodes are free

I am getting the error "Too many open files" but 99.5% of inodes are free. The ulimit is 1024 for soft and 4076 for hard. Is it possible that the error may be due to some other problem?
Inodes are not related to open files. You can check the current number of open files using lsof (something like lsof | wc -l). I would suggest just raising the limit in /etc/security/limits.conf.
Try adding something like:
* soft nofile 20000
* hard nofile 20000
And see if that helps.
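If you want to confirm which process is actually consuming descriptors before raising anything, counting the entries under /proc for a suspect PID is a quick check (the <pid> below is a placeholder):
sudo ls /proc/<pid>/fd | wc -l    # open descriptors held by that one process
sudo lsof -p <pid> | wc -l        # similar view via lsof; also lists memory-mapped files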

OpenMPI and OpenFabrics registering physical memory warning

I start mpirun with the command:
mpirun -np 2 prog
and get the following output:
--------------------------------------------------------------------------
WARNING: It appears that your OpenFabrics subsystem is configured to only
allow registering part of your physical memory. This can cause MPI jobs to
run with erratic performance, hang, and/or crash.
This may be caused by your OpenFabrics vendor limiting the amount of
physical memory that can be registered. You should investigate the
relevant Linux kernel module parameters that control how much physical
memory can be registered, and increase them to allow registering all
physical memory on your machine.
See this Open MPI FAQ item for more information on these Linux kernel module
parameters:
http://www.open-mpi.org/faq/?category=openfabrics#ib-..
Local host: node107
Registerable memory: 32768 MiB
Total memory: 65459 MiB
Your MPI job will continue, but may be behave poorly and/or hang.
--------------------------------------------------------------------------
hello from 0
hello from 1
[node107:48993] 1 more process has sent help message help-mpi-btl-openib.txt / reg mem limit low
[node107:48993] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Other installed software (the Intel MPI Library) works fine, without any errors, and uses all 64 GB of memory.
For OpenMPI I don't use any batch manager (Torque, Slurm, etc.); I work on a single node, which I reach with the command
ssh node107
For the command
cat /etc/security/limits.conf
I get the following output:
...
* soft rss 2000000
* soft stack 2000000
* hard stack unlimited
* soft data unlimited
* hard data unlimited
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 10000
* hard nproc 10000
* soft nofile 10000
* hard nofile 10000
* hard cpu unlimited
* soft cpu unlimited
...
For the command
cat /sys/module/mlx4_core/parameters/log_num_mtt
I get the output:
0
Command:
cat /sys/module/mlx4_core/parameters/log_mtts_per_seg
output:
3
Command:
getconf PAGESIZE
output:
4096
With these parameters and the formula
max_reg_mem = (2^log_num_mtt) * (2^log_mtts_per_seg) * PAGE_SIZE
I get max_reg_mem = 2^0 * 2^3 * 4096 = 32768 bytes, not the 32768 MiB (32 GB) reported in the OpenMPI warning.
What is the reason for this? Does OpenMPI not use the Mellanox parameters log_num_mtt and log_mtts_per_seg? How can I configure OpenFabrics to use all 64 GB of memory?
I solved this problem by installing the newest version of OpenMPI (2.0.2).
In /etc/modprobe.d/mlx4_core.conf, put the following module parameter:
options mlx4_core log_mtts_per_seg=5
Reload the mlx4_core module:
rmmod mlx4_ib;
rmmod mlx4_core;
modprobe mlx4_ib
Check if log_mtts_per_seg shows up as configured above:
cat /sys/module/mlx4_core/parameters/log_mtts_per_seg
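To redo the arithmetic from the question after the reload, you can plug the new parameter values back into the same formula; a small sketch (the resulting numbers will differ per machine):
mtt=$(cat /sys/module/mlx4_core/parameters/log_num_mtt)
seg=$(cat /sys/module/mlx4_core/parameters/log_mtts_per_seg)
page=$(getconf PAGESIZE)
echo "max_reg_mem = $(( (1 << mtt) * (1 << seg) * page )) bytes"    # (2^log_num_mtt) * (2^log_mtts_per_seg) * PAGE_SIZE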

Why doesn't * in /etc/security/limits.conf include the root user?

I'm running a java program as root on a linux machine. In order to increase the Max open files limitation, I added the following lines into /etc/security/limits.conf
* soft nofile 1000000
* hard nofile 1000000
However, when I check the running program with cat /proc/<pid>/limits, it still tells me that Max open files is 65536. Only after I added another two lines to /etc/security/limits.conf could the Max open files be changed to 1000000:
root soft nofile 1000000
root hard nofile 1000000
The comments in limits.conf say that
the wildcard * is for the default entry.
So when I use * as the default entry, doesn't it include the root user? Why not?
Correct, it does not include the root user. This appears to be by design. From
man 5 limits.conf
NOTE: group and wildcard limits are not applied to the root user. To
set a limit for the root user, this field must contain the literal
username root.
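If restarting the program is inconvenient, one alternative (assuming util-linux's prlimit is available; the <pid> is a placeholder) is to raise the limit of the already-running process directly:
sudo prlimit --pid <pid> --nofile=1000000:1000000    # set soft:hard nofile for that PID
cat /proc/<pid>/limits | grep 'open files'           # confirm the new values took effect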

Changing ulimit on Amazon EC2

Is there a way to increase ulimit -n (open files) for an Amazon EC2 instance? I am running an Amazon m3.2xlarge Ubuntu instance for testing purposes. The ulimit -Hn is 4096 but I need over 10k. I have even tried temporarily switching to larger instance types, but no luck.
I have googled around for quite a while, but the only topics on this are a few years old. Most suggest changing /etc/security/limits.conf, but the file is read-only for my user, so I cannot edit it.
Are there any alternative ways to change this?
Edit - my /etc/security/limits.conf file:
#* soft core 0
#root hard core 100000
root soft nofile 16500
root hard nofile 16500
#* hard rss 10000
##student hard nproc 20
##faculty soft nproc 20
##faculty hard nproc 50
#ftp hard nproc 0
#ftp - chroot /ftp
##student - maxlogins 4
I ran into this issue recently and solved it by editing /etc/security/limits.conf in my UserData script. It ended up looking something like this:
#!/bin/bash
echo "* hard nofile 65535" | tee --append /etc/security/limits.conf
echo "* soft nofile 65535" | tee --append /etc/security/limits.conf
Note that the 'user' field in limits.conf is *, meaning all users will get 65535 as their limit.
If you set up the instance, or you have sudo privileges, you can change those configuration files. Just prefix the command you use to open the file with sudo (for example, sudo vi /etc/security/limits.conf).
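Whichever way you edit the file, the new limits only apply to sessions started after the change, so a quick check in a fresh ssh session would be:
tail -n 2 /etc/security/limits.conf    # confirm the appended lines are present
ulimit -Sn                             # soft open-file limit for this session
ulimit -Hn                             # hard open-file limit for this session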

How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu?

Currently ulimit -n shows 10000. I want to increase it to 40000. I've edited /etc/sysctl.conf and put fs.file-max=40000 in it. I've also edited /etc/security/limits.conf and updated the hard and soft values. But ulimit still shows 10000. After making all these changes I rebooted my laptop. I have access to the root password.
usr_name#usr_name-lap:/etc$ /sbin/sysctl fs.file-max
fs.file-max = 500000
I added the following lines to /etc/security/limits.conf:
* soft nofile 40000
* hard nofile 40000
I also added the following line to /etc/pam.d/su:
session    required   pam_limits.so
I've tried every possible way as given on other forums, but I can reach up to a maximum limit of 10000, not beyond that. What can be the issue?
I'm making this change because neo4j throws maximum open file limits reached error.
What you are doing will not work for the root user. Maybe you are running your services as root, and hence you don't see the change.
To increase the ulimit for the root user you should replace the * with root; * does not apply to the root user. The rest is the same as what you did. I will re-quote it here.
Add the following lines to the file: /etc/security/limits.conf
root soft nofile 40000
root hard nofile 40000
And then add following line in the file: /etc/pam.d/common-session
session required pam_limits.so
This will update the ulimit for the root user. As mentioned in the comments, you may not even have to reboot to see the change.
1) Check sysctl file-max limit:
$ cat /proc/sys/fs/file-max
If the limit is lower than your desired value, open /etc/sysctl.conf and add this line at the end of the file:
fs.file-max = 65536
Finally, apply sysctl limits:
$ sysctl -p
2) Edit /etc/security/limits.conf and add the lines mentioned below:
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
These limits won't apply to the root user; if you want to change root's limits you have to do that explicitly:
root soft nofile 65535
root hard nofile 65535
...
3) Reboot system or add following line to the end of /etc/pam.d/common-session:
session required pam_limits.so
Logout and login again.
4) Check soft limits:
$ ulimit -a
and hard limits:
$ ulimit -Ha
....
open files (-n) 65535
Reference : http://ithubinfo.blogspot.in/2013/07/how-to-increase-ulimit-open-file-and.html
I am using Debian but this solution should work fine with Ubuntu.
You have to add a line in the neo4j-service script.
Here is what I have done:
nano /etc/init.d/neo4j-service
Add "ulimit -n 40000" just before the start-stop-daemon line in the do_start section.
Note that I am using version 2.0 Enterprise edition.
Hope this will help you.
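As a rough sketch (the exact contents of /etc/init.d/neo4j-service vary between Neo4j packages and versions), the edited do_start section would look something like this:
do_start()
{
    ulimit -n 40000                  # raise the open-file limit for the daemon launched below
    start-stop-daemon --start ...    # existing line from the packaged script, left unchanged
}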
I was having the same issue, and got it to work by adding entries to /etc/security/limits.d/90-somefile.conf. Note that in order to see the limits working, I had to log out completely from the ssh session, and then log back in.
I wanted to set the limit for a specific user that runs a service, but it seems that I was getting the limit that was set for the user I was logging in as. Here's an example to show how the ulimit is set based on authenticated user, and not the effective user:
$ sudo cat /etc/security/limits.d/90-nofiles.conf
loginuser soft nofile 10240
loginuser hard nofile 10240
root soft nofile 10241
root hard nofile 10241
serviceuser soft nofile 10242
serviceuser hard nofile 10242
$ whoami
loginuser
$ ulimit -n
10240
$ sudo -i
# ulimit -n
10240 # loginuser's limit
# su - serviceuser
$ ulimit -n
10240 # still loginuser's limit.
You can use * to specify an increase for all users. If I restart the service as the user I logged in as, and add ulimit -n to the init script, I see that the initial login user's limits are in place. I have not had a chance to verify which user's limits are used during a system boot, or to determine the actual nofile limit of the service I am running (which is started with start-stop-daemon).
There are two approaches that work for now:
add a ulimit adjustment to the init script, just before start-stop-daemon;
or use wildcard or more extensive ulimit settings in the security file.
You could alter the init script for neo4j to do a ulimit -n 40000 before running neo4j.
However, I can't help but feel you are barking up the wrong tree. Does neo4j legitimately need more than 10,000 open file descriptors? This sounds very much like a bug in neo4j or the way you are using it. I would try to address that.
I had a lot of trouble getting this to work.
Using the following allows you to update it regardless of your user permissions.
sudo sysctl -w fs.inotify.max_user_watches=100000
Edit
I just saw this from another user on another Stack Exchange site (both work, but this version updates the system setting permanently rather than temporarily):
echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf;
sudo sysctl -p
Try running this command; it will create a *_limits.conf file under /etc/security/limits.d:
echo "* soft nofile 102400" > /etc/security/limits.d/*_limits.conf && echo "* hard nofile 102400" >> /etc/security/limits.d/*_limits.conf
Just exit the terminal, log in again, and verify with ulimit -n; it will be set for all (*) users.
tl;dr set both the soft and hard limits
I'm sure it's working as intended but I'll add it here just in case.
For completeness the limit is set here (see below for syntax):
/etc/security/limits.conf
some_user soft nofile 60000
some_user hard nofile 60000
and activated with the following in /etc/pam.d/common-session:
session required pam_limits.so
If you set only the hard limit, ulimit -a will show the default (1024).
If you set only the soft limit, ulimit -a will show 4096.
If you set them both, ulimit -a will show the soft limit (up to the hard limit, of course).
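A quick way to see both values at once for the some_user account above (assuming both lines are in place and a fresh session has been started):
ulimit -Sn    # soft limit - the value most programs run with; should show 60000 here
ulimit -Hn    # hard limit - the ceiling the soft limit can be raised to; should also show 60000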
I did it like this
echo "NEO4J_ULIMIT_NOFILE=50000" >> neo4j
mv neo4j /etc/default/
ulimit configuration:
Log in as root.
vi /etc/security/limits.conf
Make the entries below.
ulimit configuration for the website user:
website soft nofile 8192
website hard nofile 8192
website soft nproc 4096
website hard nproc 8192
website soft core unlimited
website hard core unlimited
Make the entries below for ALL users.
ulimit configuration for every user:
* soft nofile 8192
* hard nofile 8192
* soft nproc 4096
* hard nproc 8192
* soft core unlimited
* hard core unlimited
After modifying the file, users need to log off and log in again to see the new values.
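To confirm the new values for the website user from the entries above without a full login (assuming pam_limits is enabled for su, as it is by default on most distributions):
su - website -c 'ulimit -n; ulimit -u'    # should print 8192 (soft nofile) and 4096 (soft nproc)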
