DSE OpsCenter is not starting up - Cassandra

I have added the lines below to the /usr/share/opscenter/bin/opscenter script:
OPSC_JVM_OPTS="-server -Xmx1024m -Xms1024m -XX:MaxPermSize=128m -Dpython.cachedir.skip=false
-XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
-XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark -verbose:gc -XX:+PrintGCDateStamps
-XX:+PrintGCDetails -XX:+PrintGCCause -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1M -Xloggc:$OPSC_GC_LOG_PATH/gc.log
$OPSC_JVM_OPTS"
OPSC_JVM_OPTS="$OPSC_JVM_OPTS -Djava.io.tmpdir=/usr/share/opscenter/tmp"
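One startup pitfall worth ruling out with options like these: the -Xloggc directory and the java.io.tmpdir set above must exist and be writable by the user running opscenterd, or the JVM can refuse to start at all. A quick sketch of a check (the /var/log/opscenter fallback for OPSC_GC_LOG_PATH is an assumption; substitute your actual value):

```shell
# Check that the directories referenced by the new JVM options exist and
# are writable; a missing -Xloggc path or tmpdir can keep the JVM from
# starting. The OPSC_GC_LOG_PATH default below is an assumption.
for d in "${OPSC_GC_LOG_PATH:-/var/log/opscenter}" /usr/share/opscenter/tmp; do
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "ok: $d"
  else
    echo "missing or unwritable: $d"
  fi
done
```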

Related

How to get one particular Tomcat PID while several Tomcat instances are running on Linux

I am using a Linux server where 3 instances of Tomcat are running for 3 different applications.
When I run the following command in the terminal,
ps -ef | grep tomcat
I get 3 different PIDs.
12244 1 0 May27 ? 00:02:08 /opt/shs/zulu13.28.11-ca-jdk13.0.1-linux_x64/bin/java -Djava.util.logging.config.file=/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dfile.encoding=UTF-8 -Xms1024m -Xmx4096m -XX:+UseParallelGC -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27/bin/bootstrap.jar:/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27/bin/tomcat-juli.jar -Dcatalina.base=/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27 -Dcatalina.home=/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27 -Djava.io.tmpdir=/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27/temp org.apache.catalina.startup.Bootstrap start
2687 1 2 May27 pts/3 00:01:00 /opt/shs/zulu13.28.11-ca-jdk13.0.1-linux_x64/bin/java -Djava.util.logging.config.file=/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dfile.encoding=UTF-8 -Xms1024m -Xmx4096m -XX:+UseParallelGC -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27/bin/bootstrap.jar:/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27/bin/tomcat-juli.jar -Dcatalina.base=/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27 -Dcatalina.home=/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27 -Djava.io.tmpdir=/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27/temp org.apache.catalina.startup.Bootstrap start
29534 2 0 May27 ? 00:05:12 /opt/shs/zulu13.28.11-ca-jdk13.0.1-linux_x64/bin/java -Djava.util.logging.config.file=/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dfile.encoding=UTF-8 -Xms1024m -Xmx4096m -XX:+UseParallelGC -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27/bin/bootstrap.jar:/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27/bin/tomcat-juli.jar -Dcatalina.base=/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27 -Dcatalina.home=/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27 -Djava.io.tmpdir=/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27/temp org.apache.catalina.startup.Bootstrap start
Now I can't tell which PID I need to kill to restart a particular Tomcat.
Can you please help me resolve this? Thanks.
It looks like there are three Tomcat processes running on the server. Which Tomcat do you want to kill?
Below is the PID of each Tomcat:
/app/shs/wag2/tomcat/server1/apache-tomcat-9.0.27/ 12244
/app/shs/wag1/tomcat/server1/apache-tomcat-9.0.27/ 2687
/app/shs/wag3/tomcat/server1/apache-tomcat-9.0.27 29534
You can also use the jcmd command to print the process IDs of all running JVM processes.
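Since each Tomcat command line carries a unique -Dcatalina.base, matching on that path picks out exactly one instance. A sketch (the wag2 path is taken from the ps output in the question):

```shell
# PID of only the wag2 instance (path from the question's ps output);
# pgrep -f matches against the full command line.
pgrep -f 'catalina.base=/app/shs/wag2' || true

# Or list every instance's PID alongside its catalina.base; the [o] in the
# grep pattern keeps the grep process itself out of the results.
ps -ef | grep '[o]rg.apache.catalina.startup.Bootstrap' \
  | awk '{for (i = 1; i <= NF; i++) if ($i ~ /catalina.base=/) print $2, $i}'
```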

Docker processes show via host ps -a? [duplicate]

From the host:
ps aux | grep java
me@my-host:~/elastic-search-group$ ps aux | grep java
smmsp 20473 106 6.3 4664740 257368 ? Ssl 17:48 0:09 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.4.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start
Then exec into the container:
docker exec -it 473 /bin/bash
And look at the processes:
root@473c4548b06f:/usr/share/elasticsearch# ps aux | grep java
elastic+ 1 14.0 6.3 4671936 257372 ? Ssl 17:48 0:10 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/sh
From the host:
sudo kill -9 20473
ends up killing the docker container.
Now, I may be mistaken, but I thought there was complete process segregation? Is this supposed to bleed out to the host?
The container is isolated from the host, but the host is not isolated from the container. So from the host, you can see the files, network connections, network interfaces, processes, etc., that are used inside the container. But from the container, you can only see what's in the container (barring any privilege escalation configured in the run command).
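Given that one-way visibility, you can map a host PID back to its container before killing anything. A sketch (the container ID is the one from the example above; on a cgroup v1 Docker host, containerized PIDs show a ".../docker/<container-id>" path):

```shell
# A process's cgroup file on the host reveals whether it lives in a
# container. Here we inspect our own shell's PID as a stand-in; substitute
# the java PID from the ps output instead.
grep -m1 . "/proc/$$/cgroup" || echo "no cgroup info (non-Linux host?)"

# Docker can also map the other way, listing the host PIDs that belong to
# a container, e.g.:
#   docker top 473c4548b06f
```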

Spark GraphX Out of memory error

I am running GraphX on Spark with an input file size of around 100 GB on AWS EMR.
My cluster configuration is as follows:
Nodes - 10
Memory - 122GB each
HDD - 320GB each
No matter what I do, I'm getting an out-of-memory error when I run the Spark job as:
spark-submit --deploy-mode cluster \
--class com.news.ncg.report.graph.NcgGraphx \
ncgaka-graphx-assembly-1.0.jar true s3://<bkt>/<folder>/run=2016-08-19-02-06-20/part* output
Error
AM Container for appattempt_1474446853388_0001_000001 exited with exitCode: -104
For more detailed output, check application tracking page:http://ip-172-27-111-41.ap-southeast-2.compute.internal:8088/cluster/app/application_1474446853388_0001Then, click on links to logs of each attempt.
Diagnostics: Container [pid=7902,containerID=container_1474446853388_0001_01_000001] is running beyond physical memory limits. Current usage: 1.4 GB of 1.4 GB physical memory used; 3.4 GB of 6.9 GB virtual memory used. Killing container.
Dump of the process-tree for container_1474446853388_0001_01_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 7907 7902 7902 7902 (java) 36828 2081 3522265088 359788 /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1474446853388_0001/container_1474446853388_0001_01_000001/tmp -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1474446853388_0001/container_1474446853388_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class com.news.ncg.report.graph.NcgGraphx --jar s3://discover-pixeltoucher/jar/ncgaka-graphx-assembly-1.0.jar --arg true --arg s3://discover-pixeltoucher/ncgus/run=2016-08-19-02-06-20/part* --arg s3://discover-pixeltoucher/output/20160819/ --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1474446853388_0001/container_1474446853388_0001_01_000001/__spark_conf__/__spark_conf__.properties
|- 7902 7900 7902 7902 (bash) 0 0 115810304 687 /bin/bash -c LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native::/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native:/usr/lib/hadoop/lib/native /usr/lib/jvm/java-openjdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/mnt/yarn/usercache/hadoop/appcache/application_1474446853388_0001/container_1474446853388_0001_01_000001/tmp '-XX:+UseConcMarkSweepGC' '-XX:CMSInitiatingOccupancyFraction=70' '-XX:MaxHeapFreeRatio=70' '-XX:+CMSClassUnloadingEnabled' '-XX:OnOutOfMemoryError=kill -9 %p' -Dspark.yarn.app.container.log.dir=/var/log/hadoop-yarn/containers/application_1474446853388_0001/container_1474446853388_0001_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'com.news.ncg.report.graph.NcgGraphx' --jar s3://discover-pixeltoucher/jar/ncgaka-graphx-assembly-1.0.jar --arg 'true' --arg 's3://discover-pixeltoucher/ncgus/run=2016-08-19-02-06-20/part*' --arg 's3://discover-pixeltoucher/output/20160819/' --properties-file /mnt/yarn/usercache/hadoop/appcache/application_1474446853388_0001/container_1474446853388_0001_01_000001/__spark_conf__/__spark_conf__.properties 1> /var/log/hadoop-yarn/containers/application_1474446853388_0001/container_1474446853388_0001_01_000001/stdout 2> /var/log/hadoop-yarn/containers/application_1474446853388_0001/container_1474446853388_0001_01_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt
Any idea how I can stop getting this error?
I created the SparkSession as below:
val spark = SparkSession
.builder()
.master(mode)
.config("spark.hadoop.validateOutputSpecs", "false")
.config("spark.driver.cores", "1")
.config("spark.driver.memory", "30g")
.config("spark.executor.memory", "19g")
.config("spark.executor.cores", "5")
.config("spark.yarn.executor.memoryOverhead","2g")
.config("spark.yarn.driver.memoryOverhead","1g")
.config("spark.shuffle.compress","true")
.config("spark.shuffle.service.enabled","true")
.config("spark.scheduler.mode","FAIR")
.config("spark.speculation","true")
.appName("NcgGraphX")
.getOrCreate()
It seems like you want to deploy your Spark application on YARN. If that is the case, you should not set application properties in code, but rather pass them via spark-submit:
$ ./bin/spark-submit --class com.news.ncg.report.graph.NcgGraphx \
--master yarn \
--deploy-mode cluster \
--driver-memory 30g \
--executor-memory 19g \
--executor-cores 5 \
<other options>
ncgaka-graphx-assembly-1.0.jar true s3://<bkt>/<folder>/run=2016-08-19-02-06-20/part* output
In client mode, the driver JVM has already been started by the time your code runs, so I would personally use the CLI to pass those options.
After passing the memory options to spark-submit, change your code to pick up the configuration dynamically: SparkSession.builder().getOrCreate()
PS. You might also want to increase the memory for the AM via the spark.yarn.am.memory property.
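Putting that advice together, here is a sketch of the full submission. This is not the definitive fix, just the answer's spark-submit example combined with the overhead properties the question's own SparkSession config was trying to set; note that in cluster mode the failing 1.4 GB container is the ApplicationMaster/driver, so these settings must reach spark-submit rather than the code:

```shell
# Cluster mode: driver/AM memory must be set at submit time, not in code.
# Values are copied from the question's SparkSession config.
./bin/spark-submit --class com.news.ncg.report.graph.NcgGraphx \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 30g \
  --executor-memory 19g \
  --executor-cores 5 \
  --conf spark.yarn.executor.memoryOverhead=2g \
  --conf spark.yarn.driver.memoryOverhead=1g \
  ncgaka-graphx-assembly-1.0.jar true "s3://<bkt>/<folder>/run=2016-08-19-02-06-20/part*" output
```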

When I start Cassandra server, two processes start running. Is this normal?

When I start Cassandra server, two processes start running.
Is this the normal behavior of Cassandra?
Below is the result set with two different process IDs:
5362 jsvc.exec -user cassandra -home /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/../ -pidfile /var/run/cassandra.pid -errfile &1 -outfile /var/log/cassandra/output.log -cp /usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/avro-1.4.0-fixes.jar:/usr/share/cassandra/lib/avro-1.4.0-sources-fixes.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang-2.4.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/guava-r08.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jline-0.9.94.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.7.0.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/metrics-core-2.0.3.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.6.1.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.6.1.jar:/usr/share/cassandra/lib/snakeyaml-1.6.jar:/usr/share/cassandra/lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/apache-cassandra-1.1.5.jar:/usr/share/cassandra/apache-cassandra-thrift-1.1.5.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:/usr/share/java/jna.jar:/etc/cassandra:/usr/share/java/commons-daemon.jar -Dlog4j.configuration=log4j-server.properties -XX:HeapDumpPath=/var/lib/cassandra/java_1351664352.hprof -XX:ErrorFile=/var/lib/cassandra/hs_err_1351664352.log -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss160k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false org.apache.cassandra.thrift.CassandraDaemon
5363 jsvc.exec -user cassandra -home /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/../ -pidfile /var/run/cassandra.pid -errfile &1 -outfile /var/log/cassandra/output.log -cp /usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/avro-1.4.0-fixes.jar:/usr/share/cassandra/lib/avro-1.4.0-sources-fixes.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang-2.4.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/guava-r08.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jline-0.9.94.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.7.0.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/metrics-core-2.0.3.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.6.1.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.6.1.jar:/usr/share/cassandra/lib/snakeyaml-1.6.jar:/usr/share/cassandra/lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/apache-cassandra-1.1.5.jar:/usr/share/cassandra/apache-cassandra-thrift-1.1.5.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:/usr/share/java/jna.jar:/etc/cassandra:/usr/share/java/commons-daemon.jar -Dlog4j.configuration=log4j-server.properties -XX:HeapDumpPath=/var/lib/cassandra/java_1351664352.hprof -XX:ErrorFile=/var/lib/cassandra/hs_err_1351664352.log -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss160k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 
-XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false org.apache.cassandra.thrift.CassandraDaemon
Yes, this is normal. We use jsvc to daemonize cleanly.
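To see the relationship for yourself: jsvc runs as a parent/child pair, where the resident controller process supervises the child that actually runs the Cassandra JVM (5362 and 5363 in the question's output). A sketch:

```shell
# Show PID and parent PID for every jsvc.exec process; one entry's PPID
# will be the other's PID, confirming the controller/controlled pair.
ps -o pid,ppid,args -C jsvc.exec || echo "jsvc.exec is not running here"
```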

Which parameter will take priority?

If you define your JAVA_OPTS like this:
JAVA_OPTS: -Dprogram.name=run.sh -server -Xms2048m -Xmx4096 -server -Xms128m -Xmx512m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.net.preferIPv4Stack=true
Which -Xms and -Xmx values will take priority: the first pair or the last one?
Thanks,
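As far as I know, HotSpot applies the right-most occurrence of a repeated flag, so the JVM above would end up with -Xms128m -Xmx512m; the earlier -Xmx4096 (which, lacking a unit suffix, would mean 4096 bytes) never takes effect. This can be checked by printing the final flag values, as in this sketch:

```shell
# With duplicate -Xmx flags, the last one wins: here the effective max
# heap should be 512m (536870912 bytes), not 256m. PrintFlagsFinal
# reports the value the JVM actually settled on.
java -Xmx256m -Xmx512m -XX:+PrintFlagsFinal -version 2>/dev/null \
  | grep -w MaxHeapSize
```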
