Ambari BigInsights Kafka Not Starting - Linux

Well, it seems I've hit my first issue with my BigInsights image. It's not a massive problem, but it's something to think about. The services page in my Ambari browser was showing that the Kafka service was not running; I tried a restart a number of times, but it kept failing, so I figured I'd best look into it a bit further. In this case the issue was on the Ambari master server, which has the most services running on it.
So the first port of call is to see whether Ambari is simply not making the call correctly:
[root@master ~]# kafka
Usage: /usr/bin/kafka {start|stop|status|clean}
[root@master ~]# kafka status
Kafka is not running.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15815.
[root@master ~]# kafka status
Kafka is not running.
Next I tried a clean start. Not that I figured it would make much difference, but maybe there was an issue with the logs or PID file stopping it from restarting:
[root@master ~]# kafka clean
Removed the Kafka PID file: /var/run/kafka/kafka.pid.
Removed the Kafka OUT file: /var/log/kafka/kafka.out.
Removed the Kafka ERR file: /var/log/kafka/kafka.err.
[root@master ~]# kafka status
Kafka is not running. No pid file found.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15875.
[root@master-01 ~]# kafka status
Kafka is not running.
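Interesting: start claims to succeed, yet status immediately disagrees, which suggests the process is dying straight after launch. A rough way to confirm that (using the PID file path that kafka clean reports above):
# Does the PID reported by "kafka start" survive more than a few seconds?
kafka start
sleep 5
PID=$(cat /var/run/kafka/kafka.pid 2>/dev/null)
ps -p "$PID" > /dev/null 2>&1 && echo "Kafka ($PID) is still up" || echo "Kafka ($PID) died - check /var/log/kafka/kafka.err"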

So let's take a proper look at the logs:
[root@master ~]# ls -ltr /var/log/kafka/
-<cut>-
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 13:55 controller.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6000 Aug 11 13:59 server.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 14:55 controller.log
-rw-r--r-- 1 kafka hadoop 5700 Aug 11 14:56 server.log
-rw-r--r-- 1 root root 284 Aug 11 15:09 kafka.err
-rw-r--r-- 1 root root 522 Aug 11 15:09 kafka.out
-rw-r--r-- 1 kafka hadoop 707 Aug 11 15:09 kafkaServer-gc.log
Let's look at the error and out files:
[root@master ~]# cat /var/log/kafka/kafka.err
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
[root@master ~]# cat /var/log/kafka/kafka.out
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid15875.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid16305.log
Ah, that's odd, as I asked for at least 4GB of memory for my VMs. Let's check:
[root@master ~]# cat /proc/meminfo
MemTotal: 1922260 kB
MemFree: 278404 kB
Buffers: 8600 kB
Cached: 43384 kB
Best get some more memory allocated!
The minimum recommended by the IBM support pages for a BigInsights install is 8GB, and this gives you rather an insight into why: at least 2GB is needed just to run the installed services on the system, even before you start loading the DB and running queries.
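If you can't resize the VM straight away, a temporary swap file is one way to limp along until you can (a stopgap sketch only; the size and path are my own choices, and swap is no substitute for the recommended RAM):
# Stopgap: carve out a 2GB swap file so the JVMs can at least commit their heaps.
dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -m    # confirm the extra swap is now visible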

Related

Apache - High CPU usage after upgrading to Amazon Linux 2

Amazon Linux 1:
Server version: Apache/2.4.54 (Amazon)
Server built: Jul 11 2022 21:47:38
Server's Module Magic Number: 20120211:124
Server loaded: APR 1.6.3, APR-UTIL 1.5.4, PCRE 8.21 2011-12-12
PHP 7.3.30 (cli) (built: Oct 6 2021 20:34:22) ( NTS )
Amazon Linux 2:
Server version: Apache/2.4.54 ()
Server built: Jun 30 2022 11:02:23
Server's Module Magic Number: 20120211:124
Server loaded: APR 1.7.0, APR-UTIL 1.6.1, PCRE 8.32 2012-11-30
PHP 7.4.30 (cli) (built: Jun 23 2022 20:19:00) ( NTS )
The servers are configured via automation and loaded into ALBs/ASGs.
Instance size is m4.large (2x vCPU, 8GiB Memory)
Auto-Scaling group is configured with Min:4 Max:8
This is what my httpd.conf file looks like:
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxConnectionsPerChild 4000
</IfModule>
On Amazon Linux 1, the site works as expected. The ASG spawns 4 instances and they each hover at 40-60% CPU utilization during peak hours.
The same build on Amazon Linux 2 yields wildly different results. All instances immediately get bombarded by a huge number of httpd processes.
The ASG scales up to 8 instances with every single one at 90%+ CPU usage. The instances start to lock up which causes the target group to mark them as "unhealthy" and the ASG to rotate them out endlessly. The website obviously does not work.
What could be causing them to behave so differently? What steps can I take to try and mitigate this? I'm honestly pretty new to all this so I don't know where to start.
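One thing worth ruling out first: the <IfModule prefork.c> tuning above only applies if the prefork MPM is actually the one loaded, and the default MPM can differ between builds, so it's worth confirming what each server is running (a quick check, not a definitive diagnosis):
# The prefork block above is silently ignored unless this reports "prefork".
httpd -V | grep -i 'Server MPM'
httpd -M 2>/dev/null | grep -i mpm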

Docker Devmapper space issue - increase size

I have the same issue as described in "space issue on docker devmapper and CentOS7".
That question only covers cleaning up, not how to increase the space, and I don't have any images to clean. I tried several things with dm.min_free_space but nothing worked, and I want to increase the space.
OS Version/build: Red Hat Enterprise Linux Server release 7.3 (Maipo)
App version:
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-11.el7.centos.x86_64
Go version: go1.7.4
Git commit: 96d83a5/1.12.6
Built: Tue Mar 7 09:23:34 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-11.el7.centos.x86_64
Go version: go1.7.4
Git commit: 96d83a5/1.12.6
Built: Tue Mar 7 09:23:34 2017
OS/Arch: linux/amd64
Steps to reproduce
I have no containers running currently and have some docker images pertaining to Kubernetes which will be used by the Kubernetes service.
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[kubeuser4#kubenode4 Employee]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/busybox latest 00f017a8c2a6 5 days ago 1.11 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 34d3450d733b 6 weeks ago 205 MB
docker.io/java 8 d23bdf5b1b1b 8 weeks ago 643.1 MB
gcr.io/google_containers/heapster_grafana v2.6.0-2 b43443930626 12 months ago 230 MB
When I try to create a docker image of my application that needs to be used, I get the below error.
devmapper: Thin Pool has 8783 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
I tried the clean-up as mentioned in the other forums, but it didn't help much and I'm still getting the same error. When I tried running sudo docker --storage-opt dm.min_free_space=0%, it seemed to start as a daemon, but it still failed with another error, "docker-runc not installed on system", and I don't want to run it as a daemon anyway.
Below are some command outputs
sudo dmsetup status
localvg00-lv_home: 0 20971520 linear
localvg00-lv_home: 20971520 20971520 linear
docker-251:5-134039-pool: 0 209715200 thin-pool 924 848/524288 1629226/1638400 - rw discard_passdown queue_if_no_space
localvg00-lv_tmp: 0 4194304 linear
localvg00-lv_swap: 0 8388608 linear
localvg00-lv_root: 0 2097152 linear
localvg00-lv_root: 2097152 20971520 linear
localvg00-lv_usr: 0 16777216 linear
localvg00-lv_var: 0 8388608 linear
localvg00-lv_var: 8388608 62914560 linear
sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 4
Server Version: 1.12.6
Storage Driver: devicemapper
Pool Name: docker-251:5-134039-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 106.8 GB
Data Space Total: 107.4 GB
Data Space Available: 601.2 MB
Metadata Space Used: 3.473 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.144 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.135-RHEL7 (2016-11-16)
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
Volume: local
Network: overlay null bridge host
Swarm: inactive
Runtimes: runc docker-runc
Default Runtime: docker-runc
Security Options: seccomp
Kernel Version: 4.1.12-61.1.28.el7uek.x86_64
Operating System: Oracle Linux Server 7.3
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 7.545 GiB
Name: kubenode4
I had also tried increasing the physical volume size and the logical volume size (lv_var) on my Linux machine, but it still doesn't work.
sudo lvs
[sudo] password for kubeuser4:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_home localvg00 -wi-ao---- 20.00g
lv_root localvg00 -wi-ao---- 11.00g
lv_swap localvg00 -wi-ao---- 4.00g
lv_tmp localvg00 -wi-ao---- 2.00g
lv_usr localvg00 -wi-ao---- 8.00g
lv_var localvg00 -wi-ao---- 34.00g
sudo ls -lsh /var/lib/docker/devicemapper/devicemapper/data
2.3G -rw------- 1 root root 100G Mar 14 22:16 /var/lib/docker/devicemapper/devicemapper/data
Someone please let me know how it can be done.
Thanks,
It is better to move away from devicemapper, for a few reasons.
devicemapper in loopback has an unrecoverable storage issue: https://github.com/docker/docker/issues/3182 ("devicemapper not recommended for production use").
I found it easy enough to switch to the overlay storage driver; YMMV, of course, but hopefully not by much. rm -rf /var/lib/docker is somewhat optional when switching, but it's easy and I would highly recommend it, as long as you can load your images back in afterwards. http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
systemctl stop docker
rm -rf /var/lib/docker
# If these files do not already exist, create them; otherwise you need to edit them by hand. You can also just add -s overlay in the systemd docker unit instead.
ls /etc/sysconfig/docker /etc/sysconfig/docker-storage
[[ $? != 0 ]] && {
echo OPTIONS='--selinux-enabled=false' > /etc/sysconfig/docker
echo "DOCKER_STORAGE_OPTIONS= -s overlay" > /etc/sysconfig/docker-storage
}
systemctl start docker
systemctl status docker
docker images
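Once the daemon is back up, it's worth confirming the driver actually changed (a quick sanity check; if the switch applied cleanly, the loopback warning should be gone too):
docker info | grep -i 'storage driver'    # should now report overlay rather than devicemapper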
more reading:
https://docs.docker.com/engine/userguide/storagedriver/selectadriver/
https://integratedcode.us/2016/08/30/storage-drivers-in-docker-a-deep-dive/
I was able to get it working and have mentioned it in
https://forums.docker.com/t/devmapper-space-issue/29786/3
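If you do need to stay on devicemapper, the supported route is a real LVM thin pool via dm.thinpooldev rather than the loop files. This is a sketch only, broadly following the upstream direct-lvm guidance; the volume group name "docker" here is an example:
# Example only: build an LVM thin pool in a VG called "docker" and point the daemon at it.
lvcreate --wipesignatures y -n thinpool -l 95%VG docker
lvcreate --wipesignatures y -n thinpoolmeta -l 1%VG docker
lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
# Then, in /etc/sysconfig/docker-storage:
# DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool"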

Cassandra keyspace fails when using symbolic link

Need: create keyspace on alternate device
Problem: service aborts on startup with dir-create failure messages below.
INFO [main] 2017-01-06 00:45:03,300 ViewManager.java:137 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
ERROR [main] 2017-01-06 00:45:03,393 Directories.java:239 - Failed to create /var/lib/cassandra/data/opus/aa-15be7240d3db11e6ad0eed0a1d791016 directory
ERROR [main] 2017-01-06 00:45:03,397 DefaultFSErrorHandler.java:92 - Exiting forcefully due to file system exception on startup, disk failure policy "stop"
Context: Cassandra 3.9, single-node, Ubuntu 16.04; directory perms are below.
01:52 opus/ cd /var/lib/cassandra/data
01:52 opus/ ls -l
total 24
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 opus
drwxr-xr-x 24 cassandra cassandra 4096 Jan 5 23:49 system
drwxr-xr-x 6 cassandra cassandra 4096 Jan 5 23:50 system_auth
drwxr-xr-x 5 cassandra cassandra 4096 Jan 5 23:50 system_distributed
drwxr-xr-x 12 cassandra cassandra 4096 Jan 5 23:50 system_schema
drwxr-xr-x 4 cassandra cassandra 4096 Jan 5 23:50 system_traces
01:52 opus/ cd opus
01:52 opus/ ls -l
total 4
drwxr-xr-x 3 cassandra cassandra 4096 Jan 6 00:41 aa-15be7240d3db11e6ad0eed0a1d791016
When the link is installed:
01:57 data/ ls -l
total 20
lrwxrwxrwx 1 root root 35 Jan 6 01:57 opus -> /media/opus/quantdrive/opus
Steps:
Vanilla install of cassandra 3.9;
Create keyspace in cqlsh create keyspace opus with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
Create table use opus; create table aa(aa int, primary key(aa));
Stop cassandra
Move keyspace dir mv /var/lib/cassandra/data/opus /media/opus/quantdrive
Create symbolic link ln -s /media/opus/quantdrive/opus /var/lib/cassandra/opus
Start cassandra [FAILS AS ABOVE], complaining it cannot create the directory, even though the directory is already present
There was no change in perms on the opus keyspace directory; I just moved it. When I move it back, cassandra starts fine.
I would be grateful for any help with this, and I apologize in advance if the solution to my problem is described elsewhere or if I'm missing the obvious.
Move the mount point for the target drive from a user-owned directory to a root-owned one. In my case I moved the mount point from /media/opus/quantdrive, which is owned by user opus, to /mnt/quantdrive, which is owned by root, and everything worked fine.
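Roughly, the move looks like this (a sketch; the device name is an example, so substitute your actual drive and keep /etc/fstab in sync):
# Remount the drive under a root-owned mount point and re-point the keyspace symlink.
umount /media/opus/quantdrive
mkdir -p /mnt/quantdrive
mount /dev/sdb1 /mnt/quantdrive          # example device - use the real one
rm /var/lib/cassandra/data/opus
ln -s /mnt/quantdrive/opus /var/lib/cassandra/data/opus
chown -h cassandra:cassandra /var/lib/cassandra/data/opus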

Elasticsearch connection error in Ubuntu 16.04

On my Ubuntu machine, when I run the command curl -X GET 'http://localhost:9200' to test the connection, it shows the following message.
curl: (7) Failed to connect to localhost port 9200: Connection refused
When I check the service status with sudo systemctl status elasticsearch it shows the following message.
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2016-11-20 16:32:30 BDT; 44s ago
Docs: http://www.elastic.co
Process: 8653 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefa
Process: 8649 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 8653 (code=exited, status=1/FAILURE)
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,579 main ERROR Null object returned for RollingFile in Appenders.
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,579 main ERROR Null object returned for RollingFile in Appenders.
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,580 main ERROR Unable to locate appender "rolling" for logger config "root"
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,580 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logge
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,581 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,581 main ERROR Unable to locate appender "deprecation_rolling" for logger config "o
Nov 20 16:32:29 bahar elasticsearch[8653]: [2016-11-20T16:32:25,592][WARN ][o.e.c.l.LogConfigurator ] ignoring unsupported logging configuration
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Unit entered failed state.
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
This error comes from the path and log settings in elasticsearch.yml (/etc/elasticsearch/elasticsearch.yml).
Check these path settings and your error will be removed.
That means elasticsearch is not running. And from what I see, there is a problem with starting it. Check your elasticsearch configuration.
To check whether Elasticsearch is running, run the following command:
$ ps aux|grep elasticsearch
If Elasticsearch has not started, check your Java environment, then download a new Elasticsearch and install it again:
1. Check that Java is correctly installed:
$ java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
If your Java version is lower than 1.7, switch to a newer one.
2. Download the Elasticsearch install package and unpack it:
$ tar -zxvf elasticsearch-2.3.3.tar.gz
3. Run Elasticsearch:
$ cd elasticsearch-2.3.3
$ ./bin/elasticsearch
Usually it's a write-permission issue on the log directory (/var/log/elasticsearch by default); use ls -l to check the permissions, and change the mode to 777 for the log directory and files if necessary.
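For example (a sketch; handing the directory back to the elasticsearch user is usually a tighter fix than 777, assuming the package created that user):
ls -ld /var/log/elasticsearch                                     # who owns the log directory?
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch
sudo systemctl restart elasticsearch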
Long story short: a system reboot might get it OK.
It has been a while since the question was asked. Anyway, I ran into a similar problem recently.
The elasticsearch service on one of my nodes died, with errors similar to those posted in the question when restarting the service. It said the log folder it wanted to write to was a read-only file system. But those files and directories are indeed owned by the user elasticsearch (version 5.5, deployed on CentOS 6.5), so there should not have been a read-only problem.
I checked and didn't find a clue, so I just rebooted the system. After rebooting, everything went fine without any further tuning: the elasticsearch service started on boot as configured, it found the cluster and all the other nodes, and the cluster health status turned green after a little while.
I guess the root cause might be some hardware failure in my case. All data and logs managed by the elasticsearch cluster are stored on a 2TB SSD drive mounted on each node, and our hardware team had just recovered from an external storage failure; all the nodes restarted during that recovery. Chances are some lingering issues from that caused the problem.

Cassandra won't start in linux as a service

I have a Debian Linux image running on Google Compute. I can successfully get Cassandra working with "sudo cassandra" or "sudo cassandra -f", but as soon as I log off it stops working. And when I try to run it as a service it simply doesn't say anything and doesn't start either! I installed it using the apt-get package, v2.1.
I've tried sudo service cassandra start. It looks like it's doing something and then quits without writing any logs.
Please help me get this running as a service. I can't even locate where the logs are stored when I run it as a service.
I ran into this issue recently, and as BrianC indicated it can be an out of memory condition. In my case I could successfully start cassandra with sudo cassandra -f but not with /etc/init.d/cassandra start.
For me, the last log entry in /var/log/cassandra/system.log when starting as a service was:
INFO [main] 2015-04-30 10:58:16,234 CassandraDaemon.java (line 248) Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.0.14.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.14.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:/usr/share/java/jna.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar
And nothing afterwards. If it is a memory problem you should be able to verify this in your syslog. If it contains something like:
Apr 30 10:53:39 dev kernel: [1173246.957818] Out of memory: Kill process 8229 (java) score 132 or sacrifice child
Apr 30 10:53:39 dev kernel: [1173246.957831] Killed process 8229 (java) total-vm:634084kB, anon-rss:286772kB, file-rss:12676kB
Increase your RAM. In my case I increased it to 2GB and it started fine.
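If adding RAM isn't immediately possible, capping Cassandra's heap is the other lever (a sketch; on the Debian package the settings live in /etc/cassandra/cassandra-env.sh, and the values below are only examples to tune):
# First confirm the OOM killer is what took the JVM down.
grep -i "out of memory" /var/log/syslog | tail
# Then, in /etc/cassandra/cassandra-env.sh, set both of these together (per the comments in that file):
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"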
