I have a Neo4j Enterprise cluster with 6 nodes (1 master and 5 slaves), hosted in separate Linux (CentOS 6.4) VMs on Azure, that I am testing an application against. I have an object that manages connections across all 6 nodes using a simple round-robin-like technique. I noticed that when writing to slave nodes, the following error message occurs several times in the messages.log file:
ERROR [o.n.k.h.c.m.MasterServer]: Could not finish off dead channel
org.neo4j.kernel.ha.com.master.InvalidEpochException: Invalid epoch 282880438682249, correct epoch is 282880443056723
at org.neo4j.kernel.ha.com.master.MasterImpl.assertCorrectEpoch(MasterImpl.java:218) ~[neo4j-ha-2.1.2.jar:2.1.2]
at org.neo4j.kernel.ha.com.master.MasterImpl.finishTransaction(MasterImpl.java:363) ~[neo4j-ha-2.1.2.jar:2.1.2]
at org.neo4j.kernel.ha.com.master.MasterServer.finishOffChannel(MasterServer.java:70) ~[neo4j-ha-2.1.2.jar:2.1.2]
at org.neo4j.com.Server.tryToFinishOffChannel(Server.java:411) ~[neo4j-com-2.1.2.jar:2.1.2]
at org.neo4j.com.Server$4.run(Server.java:589) [neo4j-com-2.1.2.jar:2.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_60]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
I am at a loss. Why does this error occur? When it occurs, the write fails. The servers synchronize their time via NTP. The application writing to Neo4j is a .NET application using the Neo4jClient library. Thank you for any guidance you can provide.
Amir.
The InvalidEpochException can occur when a master switch happens while a slave is in the middle of committing a transaction. The main cause of unexpected master switches is lengthy GC pauses that exceed the cluster timeout settings.
So you need to analyze GC behaviour and tune it, or tweak your cluster timeout settings.
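As a concrete starting point, here is a minimal sketch of where those knobs live in a Neo4j 2.1 install; the GC-logging flags are standard HotSpot options, but the timeout values below are placeholders to tune against your own GC logs, not recommendations:

# conf/neo4j-wrapper.conf -- enable GC logging so long pauses become visible
wrapper.java.additional=-Xloggc:data/log/neo4j-gc.log
wrapper.java.additional=-XX:+PrintGCDetails
wrapper.java.additional=-XX:+PrintGCDateStamps

# conf/neo4j.properties -- HA failure-detection settings (example values);
# keep ha.heartbeat_timeout comfortably longer than your worst observed GC pause
ha.heartbeat_interval=5s
ha.heartbeat_timeout=30s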
Our SDK version is Apache Beam Python 3.7 SDK 2.25.0
There is a pipeline which reads data from Pub/Sub, transforms it and saves results to GCS.
Usually it works fine for 1-2 weeks. After that it gets stuck:
"Operation ongoing in step s01 for at least 05m00s without outputting or completing in state process
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.maybeWait(RemoteGrpcPortWriteOperation.java:175)
at org.apache.beam.runners.dataflow.worker.fn.data.RemoteGrpcPortWriteOperation.process(RemoteGrpcPortWriteOperation.java:196)
at org.apache.beam.runners.dataflow.worker.util.common.worker.OutputReceiver.process(OutputReceiver.java:49)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:201)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:123)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1400)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1100(StreamingDataflowWorker.java:156)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$7.run(StreamingDataflowWorker.java:1101)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Step 01 is just a "Read PubSub Messages" >> beam.io.ReadFromPubSub(subscription=subscription)
After this, Dataflow increases the number of workers and stops processing any new data. The job is still in the RUNNING state.
We just need to restart the job to fix it, but this happens every ~2 weeks.
How can we fix it?
This looks like an issue with the legacy "Java Runner Harness." I would suggest running your pipeline with Dataflow Runner v2 to avoid these kinds of issues. You could also wait until it becomes the default (it is currently rolling out).
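If it helps, here is a minimal sketch of opting in to Runner v2 from the Python SDK via the use_runner_v2 experiment; the project, region, bucket and subscription names are placeholders, and the rest of the pipeline is elided:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder subscription path
subscription = "projects/my-project/subscriptions/my-sub"

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                 # placeholder
    region="us-central1",                 # placeholder
    temp_location="gs://my-bucket/tmp",   # placeholder
    streaming=True,
    experiments=["use_runner_v2"],        # opt in to Dataflow Runner v2
)

with beam.Pipeline(options=options) as p:
    messages = p | "Read PubSub Messages" >> beam.io.ReadFromPubSub(subscription=subscription)
    # ... transform and write to GCS as before ...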
I am trying to upgrade from Flink 1.7.2 to Flink 1.10 and I am having a problem with the Cassandra connector. Every time I start a job that uses it, the following exception is thrown:
com.datastax.driver.core.exceptions.TransportException: [/xx.xx.xx.xx] Error writing
at com.datastax.driver.core.Connection$10.operationComplete(Connection.java:550)
at com.datastax.driver.core.Connection$10.operationComplete(Connection.java:534)
at com.datastax.shaded.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at com.datastax.shaded.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:621)
at com.datastax.shaded.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138)
at com.datastax.shaded.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
at com.datastax.shaded.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
at com.datastax.driver.core.Connection$Flusher.run(Connection.java:870)
at com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at com.datastax.shaded.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at com.datastax.shaded.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.datastax.shaded.netty.handler.codec.EncoderException: java.lang.OutOfMemoryError: Direct buffer memory
at com.datastax.shaded.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
at com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:643)
Also the following message was printed when the job was run locally (not in YARN):
13:57:54,490 ERROR com.datastax.shaded.netty.util.ResourceLeakDetector - LEAK: You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource that must be reused across the JVM, so that only a few instances are created.
All jobs that do not use the Cassandra connector work properly.
Can someone help?
UPDATE: The bug is still reproducible and I think this is the reason: https://issues.apache.org/jira/browse/FLINK-17493.
I had an old configuration (from Flink 1.7) where classloader.parent-first-patterns.additional: com.datastax. was set and my Cassandra Flink connector jar was in the flink/lib folder (this was done because of other problems related to shaded Netty that I had with the connector). With the migration to Flink 1.10, that setup triggered the problem above. Once I removed the classloader.parent-first-patterns.additional: com.datastax. configuration, included flink-connector-cassandra_2.12-1.10.0.jar in my job jar, and removed it from /usr/lib/flink/lib/, the problem was no longer reproducible.
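For reference, instead of living in /usr/lib/flink/lib/ the connector now ships inside the job's fat jar as an ordinary dependency; a sketch of the Maven coordinates, inferred from the jar name above (adjust for your build tool):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-cassandra_2.12</artifactId>
    <version>1.10.0</version>
</dependency>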
I've been using Spark/Hadoop on Dataproc for months, both via Zeppelin and the Dataproc console, but just recently I got the following error:
Caused by: java.io.FileNotFoundException: /hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1530998908050_0001/blockmgr-9d6a2308-0d52-40f5-8ef3-0abce2083a9c/21/temp_shuffle_3f65e1ca-ba48-4cb0-a2ae-7a81dcdcf466 (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.apache.spark.storage.DiskBlockObjectWriter.initialize(DiskBlockObjectWriter.scala:103)
at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:116)
at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:237)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
First, I got this type of error in a Zeppelin notebook and thought it was a Zeppelin issue. The error, however, seems to occur randomly. I suspect it has something to do with one of the Spark workers not being able to write to that path. I googled it and was advised to delete the files under /hadoop/yarn/nm-local-dir/usercache/ on each Spark worker and to check that there is disk space available on each worker. After doing so, I still sometimes got this error. I also ran a Spark job directly on Dataproc and a similar error occurred. I'm on Dataproc image version 1.2.
Thanks
Peeranat F.
Ok. We faced the same issue on GCP and the reason for this is resource preemption.
In GCP, resource preemption can happen through two strategies:
Node preemption - removing nodes in the cluster and replacing them
Container preemption - removing YARN containers
This is configured in GCP by your admin/DevOps person to optimize the cost and resource utilization of the cluster, especially if it is shared.
What your stack trace tells me is that it's node preemption. This error occurs randomly because sometimes the node that gets preempted is your driver node, which causes the app to fail altogether.
You can see which nodes are preemptible in your GCP console.
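If you prefer the command line, the cluster description shows the same information; a sketch with placeholder cluster name and region (the exact output layout may vary by gcloud version):

gcloud dataproc clusters describe my-cluster --region=us-central1
# look for secondaryWorkerConfig / isPreemptible: true in the output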
The following could be other possible causes:
The cluster uses preemptible workers (they can be deleted at any time), so their work may not be completed, which can cause inconsistent behavior.
The cluster is resized during the Spark job execution, which causes tasks/containers/executors to be restarted.
Memory issues. Shuffle operations are usually done in memory, but if the memory resources are exceeded they spill over to disk.
Disk space on the workers is full due to a large amount of shuffle operations, or any other process that uses disk on the workers, for example logs.
YARN kills tasks to make room for failed attempts.
So, I suggest the following actions as possible workarounds (see the sketch after this list):
1. Increase the memory of the workers and master; this will rule out memory problems.
2. Change the Dataproc image version.
3. Change cluster properties to tune your cluster, especially for MapReduce and Spark.
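For example, points 1-3 can all be applied when (re)creating the cluster; a hedged gcloud sketch where the cluster name, machine types, memory values and image version are placeholders to adjust rather than recommendations:

gcloud dataproc clusters create my-cluster \
    --image-version=1.3 \
    --master-machine-type=n1-highmem-4 \
    --worker-machine-type=n1-highmem-4 \
    --num-workers=4 \
    --num-preemptible-workers=0 \
    --properties='spark:spark.executor.memory=6g,spark:spark.driver.memory=6g'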
I'm running DataStax OpsCenter 5.2.0 on a 1-node test Cassandra cluster, installed from the Amazon DataStax AMI version 2.6.3 with Cassandra Community 2.2.0-1.
OpsCenter doesn't report any errors (all agents connected), yet on some graphs I see NO DATA (while I know there have been a lot of requests), and on some graphs there is nothing at all. Some are working just fine, like OS: Load, Storage Capacity and OS: Disk Utilization.
What could be the reason for this? How can I fix it?
EDIT:
The OpsCenter logs seem to be fine. In the agent.log file, I've found the following errors (dozens of times):
ERROR [jmx-metrics-2] 2015-09-21 06:50:30,910 Error getting CF metrics
java.lang.UnsupportedOperationException: nth not supported on this type: PersistentArrayMap
at clojure.lang.RT.nthFrom(RT.java:857)
at clojure.lang.RT.nth(RT.java:807)
at opsagent.rollup$process_metric_map.invoke(rollup.clj:252)
at opsagent.metrics.jmx$cf_metric_helper.invoke(jmx.clj:96)
at opsagent.metrics.jmx$start_pool$fn__15320.invoke(jmx.clj:159)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR [jmx-metrics-4] 2015-09-21 06:50:38,524 Error getting general metrics
java.lang.UnsupportedOperationException: nth not supported on this type: PersistentHashMap
at clojure.lang.RT.nthFrom(RT.java:857)
at clojure.lang.RT.nth(RT.java:807)
at opsagent.rollup$process_metric_map.invoke(rollup.clj:252)
at opsagent.metrics.jmx$generic_metric_helper.invoke(jmx.clj:73)
at opsagent.metrics.jmx$start_pool$fn__15334$fn__15335.invoke(jmx.clj:171)
at opsagent.metrics.jmx$start_pool$fn__15334.invoke(jmx.clj:170)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
BTW, in the DataStax agent configuration file (address.yaml), I only have the stomp_interface parameter set, pointing to my node's IP.
NO DATA in this case was caused by a bug in the DataStax agent. I've included the related error in my question. The same error is also mentioned here: Opsagent UnsupportedOperationException with PersistentHashMap
After upgrading from DataStax OpsCenter 5.2.0 to 5.2.1 and upgrading the agent to the same version, the problem went away.
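For a package-based install the upgrade itself is routine; a sketch assuming the DataStax yum repository is already configured (package and service names are the usual ones, verify against your install method):

sudo yum update opscenter           # on the OpsCenter host
sudo service opscenterd restart
sudo yum update datastax-agent      # on each Cassandra node
sudo service datastax-agent restart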
Thank you @phact for your help!
I had a similar error before and it was due to a wrongly configured address.yaml.
In my case the issue was that I only wrote a plain IP address in the hosts setting, and it was fixed by using the array syntax instead. It seemed that without it, the agent was using the wrong data type, which then raised an UnsupportedOperationException. Try adding your local interface IP (or whatever IP your Cassandra node is listening on) to the hosts setting in address.yaml like so:
hosts: ["<local-node-ip>"]
Maybe also try adding the following settings to address.yaml, just to be sure that the correct parameters are used by the agent:
rpc_address: <local-node-ip>
agent_rpc_interface: <local-node-ip>
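Putting those pieces together, a minimal address.yaml for a single-node test setup might look like the sketch below; the IPs are placeholders, and stomp_interface points at the machine running opscenterd (on a one-node setup that is the same host):

stomp_interface: <opscenter-ip>
hosts: ["<local-node-ip>"]
rpc_address: <local-node-ip>
agent_rpc_interface: <local-node-ip>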
We are trying to save our RDD, which will have close to 4 billion rows, to Cassandra. Some of the data gets persisted, but for some partitions we see these error logs in the Spark logs.
We have already set these two properties for the Cassandra connector. Is there some other optimization we would need to do? Also, what are the recommended settings for the reader? We have left them at the defaults.
spark.cassandra.output.batch.size.rows=1
spark.cassandra.output.concurrent.writes=1
We are running Spark 1.1.0 and spark-cassandra-connector-java_2.10 v2.1.0.
15/01/08 05:32:44 ERROR QueryExecutor: Failed to execute: com.datastax.driver.core.BoundStatement@3f480b4e
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.87.33.133:9042 (com.datastax.driver.core.exceptions.DriverException: Timed out waiting for server response))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Thanks
Ankur
I've seen something similar in my four-node cluster. It seemed that if I specified EVERY Cassandra node name in the Spark settings, then it worked; however, if I only specified the seeds (of the four, two were seeds) then I got the exact same issue. I haven't followed up on it, as specifying all four gets the job done (but I intend to at some point). I'm using hostnames for the seed values, not IPs, and using hostnames in the Spark Cassandra settings. I did hear it could be due to some Akka DNS issues. Maybe try using IP addresses throughout, or specifying all hosts. The latter has been working flawlessly for me.
I realized I was running the application with spark.cassandra.output.concurrent.writes=2. I changed it to 1 and there were no more exceptions. The exceptions occurred because Spark was producing data at a much higher rate than our Cassandra cluster could write, so changing the setting to 1 worked for us.
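For anyone hitting the same thing, these connector settings are ordinary Spark configuration properties, so they can be passed at submit time instead of being hard-coded; a sketch with placeholder class and jar names (if your spark-submit does not support --conf, put them in spark-defaults.conf or on the SparkConf in code):

# only the two --conf lines are the point here; class and jar are placeholders
spark-submit \
    --class com.example.MyJob \
    --conf spark.cassandra.output.batch.size.rows=1 \
    --conf spark.cassandra.output.concurrent.writes=1 \
    my-job.jar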
Thanks!!