Accumulo IOExceptions in internalRead

I am trying to query an Accumulo table using GeoMesa. Everything works
fine and I get the correct query results, but warnings about
"Error closing output stream" keep getting logged. Here is a sample log:
[WARN] 2017-05-04 13:00:00 TIOStreamTransport:112 - Error closing output stream.
java.io.IOException: The stream is closed
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:309)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:571)
at org.apache.accumulo.core.rpc.ThriftUtil.returnClient(ThriftUtil.java:151)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:710)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:353)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
I am also seeing the following errors in the Accumulo web interface:
Got an IOException in internalRead!
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.thrift.transport.TNonblockingSocket.read(TNonblockingSocket.java:142)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.internalRead(AbstractNonblockingServer.java:539)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.read(AbstractNonblockingServer.java:338)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleRead(AbstractNonblockingServer.java:203)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:203)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
I searched for this and found suggestions that it is related to
overloading the cluster (which I don't think is happening here). How can I solve this?
Any help and suggestions are welcome.

There is an open bug for this on the Apache JIRA; refer to that ticket.
As a temporary workaround, we can suppress this warning by adding the following logger tag to Accumulo's conf/generic_logger.xml:
<logger name="org.apache.accumulo.server.util.TServerUtils$THsHaServer">
  <level value="ERROR"/>
</logger>
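The entry above silences the tserver-side messages. If the client-side "Error closing output stream" warning is also unwanted, the same approach should work for the Thrift transport logger (its name appears in the client log above) in the client's log4j configuration; a sketch, assuming the client logs through log4j as well:
<logger name="org.apache.thrift.transport.TIOStreamTransport">
  <level value="ERROR"/>
</logger>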

Related

Apache Spark Cluster\Windows: Getting Connection refused: no further information from worker node

I need help getting Spark working on a Windows cluster of 3 nodes. I am able to download, configure, and run the master node and worker nodes; the worker nodes register successfully with the master, and I can see both worker nodes in the Master UI. When I try to submit a job using:
spark-submit --master spark://IP:7077 hello_world.py
Spark continuously tries to start executors, but they all fail with exit code 1, and it does not stop until I kill the job. When I check each worker's log in the UI, I see the following error:
Using Spark's default log4j profile: org/apache/spark/log4j2-defaults.properties
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1894)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:424)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:413)
at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$9(CoarseGrainedExecutorBackend.scala:444)
at scala.runtime.java8.JFunction1$mcVI$sp.apply(JFunction1$mcVI$sp.java:23)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
at scala.collection.immutable.Range.foreach(Range.scala:158)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$7(CoarseGrainedExecutorBackend.scala:442)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
... 4 more
Caused by: java.io.IOException: Failed to connect to <Master DNS>/<Master IP>:56785
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: <Master DNS>/<Master IP>:56785
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
I'm using Spark: spark-3.3.1-bin-hadoop3. The application to run is the hello_world.py shown above.
Please help.
Thanks
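One detail worth noting (inferred from the trace, not stated in the question): the executor is failing to connect back to an ephemeral port on the driver host (56785 above), and Windows Firewall commonly blocks such randomly chosen ports. As a sketch, Spark's standard spark.driver.port and spark.blockManager.port properties let you pin those ports to fixed values so matching inbound firewall rules can be opened; the port numbers below are arbitrary choices:
spark-submit --master spark://IP:7077 ^
  --conf spark.driver.port=7078 ^
  --conf spark.blockManager.port=7079 ^
  hello_world.py
With fixed ports, inbound rules can then be added to Windows Defender Firewall on the machine where the driver runs.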

What is the cause of java.io.IOException and java.io.EOFException errors constantly appearing after starting WSO2 API Manager 3.2.0?

I am getting the following errors in my WSO2 API Manager and Gateway node logs after starting them up:
TID: [-1] [] [2021-07-06 07:50:51,388] ERROR {org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver} - Error while reading from the socket. java.io.EOFException: Connection closed from remote end.
at org.wso2.carbon.databridge.commons.binary.BinaryMessageConverterUtil.loadData(BinaryMessageConverterUtil.java:39)
at org.wso2.carbon.databridge.receiver.binary.internal.BinaryDataReceiver$BinaryTransportReceiver.run(BinaryDataReceiver.java:258)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
TID: [-1] [] [2021-07-06 07:50:52,753] ERROR {org.wso2.andes.transport.network.mina.MinaNetworkHandler} - Exception caught by Mina java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.read(SocketIoProcessor.java:218)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.process(SocketIoProcessor.java:198)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.access$400(SocketIoProcessor.java:45)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:485)
at org.wso2.org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51)
at java.lang.Thread.run(Thread.java:748)
TID: [-1] [] [2021-07-06 07:50:52,757] ERROR {org.wso2.andes.server.protocol.MultiVersionProtocolEngine} - Error establishing session java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.read(SocketIoProcessor.java:218)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.process(SocketIoProcessor.java:198)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor.access$400(SocketIoProcessor.java:45)
at org.wso2.org.apache.mina.transport.socket.nio.SocketIoProcessor$Worker.run(SocketIoProcessor.java:485)
at org.wso2.org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:51)
at java.lang.Thread.run(Thread.java:748)
I get these errors both when starting with the default deployment.toml and after configuring it for my deployment needs. They do not seem to affect the functionality of creating/publishing APIs, creating applications and subscriptions, or generating keys, so I am not sure what the issue is.
I am currently running this API Manager on an EC2 instance on AWS. If there's any other info needed to help find out why this is happening, please let me know.
Thanks
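As a note not confirmed in this thread: this pattern of EOFException and connection-reset errors from the databridge binary receiver and the Andes broker is often produced by something opening and immediately dropping TCP connections to those ports, for example an AWS load balancer health check or a port scan, and is usually harmless. If the analytics data publisher is not being used at all, it can be switched off in deployment.toml; a sketch, assuming the APIM 3.2.0 configuration model:
[apim.analytics]
enable = false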

Repeated AccessControlException: Missing permission ("java.lang.RuntimePermission" "exitVM.1") Errors in Datastax 5.0.1 spark node

I am seeing the following error repeatedly in the logs of several of my DataStax nodes. I haven't been able to find any references for what is causing it or how to prevent it. Why is it trying to exit the VM?
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,540 Logging.scala:74 - All masters are unresponsive! Giving up.
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,540 Logging.scala:95 - Uncaught exception in thread Thread[dispatcher-event-loop-3,5,main]
java.security.AccessControlException: Missing permission ("java.lang.RuntimePermission" "exitVM.1")
at com.datastax.bdp.util.DseJavaSecurityManager.verify(DseJavaSecurityManager.java:36) ~[dse-core-5.0.1.jar:5.0.1]
at com.datastax.bdp.util.DseJavaSecurityManager.checkPermission(DseJavaSecurityManager.java:44) ~[dse-core-5.0.1.jar:5.0.1]
at java.lang.SecurityManager.checkExit(SecurityManager.java:761) ~[na:1.8.0_101]
at java.lang.Runtime.exit(Runtime.java:107) ~[na:1.8.0_101]
at java.lang.System.exit(System.java:971) ~[na:1.8.0_101]
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:300) ~[spark-core_2.10-1.6.1.2.jar:5.0.1]
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1163) ~[spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:230) [spark-core_2.10-1.6.1.2.jar:5.0.1]
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:539) [spark-core_2.10-1.6.1.2.jar:5.0.1]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:166) [scala-library-2.10.6.jar:na]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:166) [scala-library-2.10.6.jar:na]
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215) [spark-core_2.10-1.6.1.2.jar:1.6.1.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
ERROR [dispatcher-event-loop-3] 2017-04-13 10:39:47,541 Logging.scala:95 - Ignoring error
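For what it's worth (this context is read off the stack trace, not from an official reference): the trace shows Spark's Worker giving up after all masters were unresponsive and calling System.exit(1) via Utils.tryOrExit, and DSE's DseJavaSecurityManager denying the exitVM permission, so the exit attempt surfaces as this AccessControlException instead of terminating the JVM. A minimal Java 8 sketch of the mechanism, with a hypothetical class name:
public class ExitVmDemo {
    public static void main(String[] args) {
        // Install a SecurityManager that vetoes VM exit, as DseJavaSecurityManager does.
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkExit(int status) {
                // "exitVM.1" in the log is RuntimePermission("exitVM." + status) for status 1.
                throw new java.security.AccessControlException(
                        "Missing permission (\"java.lang.RuntimePermission\" \"exitVM." + status + "\")");
            }
        });
        // Spark's Worker calls System.exit(1) after failing to re-register with any master;
        // under the manager above, this throws instead of exiting.
        System.exit(1);
    }
}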

Apache Kafka 0.8 log4j2.xml appender getting timeout error

I need to configure a Kafka appender using log4j2.xml. The setup worked fine locally, but on the server I am getting the error below.
Local: Kafka kafka_2.11-0.10.2.0 with Kafka appender 0.9.0.0. This worked fine.
Server: Kafka 0.8 with Kafka appender 0.9.0.0.
On the server I got the following error:
2017-03-09 21:19:18,255 main ERROR Unable to write to Kafka [kafkaAppender] for appender [kafkaAppender]. java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:730)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:483)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:430)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:353)
at org.apache.logging.log4j.core.appender.mom.kafka.KafkaManager.send(KafkaManager.java:81)
at org.apache.logging.log4j.core.appender.mom.kafka.KafkaAppender.append(KafkaAppender.java:85)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:155)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:128)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:119)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:390)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:375)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:359)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:349)
at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
at org.apache.logging.slf4j.Log4jLogger.log(Log4jLogger.java:376)
at org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
at org.springframework.boot.SpringApplication.logStartupProfileInfo(SpringApplication.java:665)
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:353)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:313)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
at com.charter.kafka.proxy.app.ProxyApplication.main(ProxyApplication.java:21)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
My bad: I had written the value for the bootstrap.servers property with an extra space in log4j2.xml. After removing the space, I could connect to Kafka. Thanks.
You are unable to fetch the metadata. Check that the Kafka cluster is up and running and that there is no connectivity issue.
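For reference, a minimal log4j2.xml sketch of a Kafka appender with a cleanly formed bootstrap.servers value (the topic name and broker address below are placeholders): the property value is handed to the Kafka producer as-is, so stray whitespace such as " host1:9092" breaks the connection.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Topic and broker address are placeholders for this sketch. -->
    <Kafka name="kafkaAppender" topic="app-logs">
      <PatternLayout pattern="%d{ISO8601} %p %c - %m%n"/>
      <!-- No leading or trailing spaces inside the value. -->
      <Property name="bootstrap.servers">host1:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="kafkaAppender"/>
    </Root>
  </Loggers>
</Configuration>
Separately, note the version mismatch in the question: a 0.9.0.0 producer client generally cannot talk to an 0.8 broker, and a metadata update timeout is a typical symptom of that as well.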

Cassandra version 2.1.3 java.lang.AssertionError: attempted to delete non-existing file system-compaction_history

I upgraded from Cassandra 2.1.2 to 2.1.3 following the upgrade instructions. However, I started to see this new java.lang.AssertionError. See the log below.
Environment: Windows 7 (yes, I know...), Java 1.8.0_11
ERROR [NonPeriodicTasks:1] 2015-04-05 20:34:39,638 CassandraDaemon.java:167 - Exception in thread Thread[NonPeriodicTasks:1,5,main]
java.lang.AssertionError: attempted to delete non-existing file system-compaction_history-tmplink-ka-548-Data.db
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.io.sstable.SSTableReader$Tidier$1.run(SSTableReader.java:2072) ~[apache-cassandra-2.1.3.jar:2.1.3]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_11]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_11]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_11]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[na:1.8.0_11]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_11]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_11]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_11]
Does anyone know what this error is about and how I can correct it, or whether there is already an open JIRA at Apache?
This is a known issue; a bug has already been filed. See the open JIRA ticket:
https://issues.apache.org/jira/browse/CASSANDRA-9121
