ActionBarSherlock + Google Maps v2 map not shown (no map-related errors in the log)

I just set up my project to support Android Maps v2 together with ActionBarSherlock, following this answer:
http://facebook.stackoverflow.com/a/13727539/689723
I fought through the typical XML inflation error and finally got the fragment to load without errors.
I'm pretty sure my AndroidManifest is OK.
My log is:
01-21 00:43:03.455: D/dalvikvm(4990): GC_CONCURRENT freed 311K, 8% free 13870K/14983K, paused 1ms+3ms, total 20ms
01-21 00:43:03.455: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 7ms
01-21 00:43:03.555: D/dalvikvm(4990): GC_CONCURRENT freed 379K, 7% free 14100K/15047K, paused 12ms+2ms, total 32ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 6ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 4ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 7ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 4ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 6ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 4ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 6ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 5ms
01-21 00:43:03.555: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 4ms
01-21 00:43:03.560: E/System(4990): Uncaught exception thrown by finalizer
01-21 00:43:03.560: E/System(4990): java.io.IOException: close failed: EIO (I/O error)
01-21 00:43:03.560: E/System(4990): at libcore.io.IoUtils.close(IoUtils.java:41)
01-21 00:43:03.560: E/System(4990): at java.io.RandomAccessFile.close(RandomAccessFile.java:166)
01-21 00:43:03.560: E/System(4990): at java.io.RandomAccessFile.finalize(RandomAccessFile.java:175)
01-21 00:43:03.560: E/System(4990): at java.lang.Daemons$FinalizerDaemon.doFinalize(Daemons.java:186)
01-21 00:43:03.560: E/System(4990): at java.lang.Daemons$FinalizerDaemon.run(Daemons.java:169)
01-21 00:43:03.560: E/System(4990): at java.lang.Thread.run(Thread.java:856)
01-21 00:43:03.560: E/System(4990): Caused by: libcore.io.ErrnoException: close failed: EIO (I/O error)
01-21 00:43:03.560: E/System(4990): at libcore.io.Posix.close(Native Method)
01-21 00:43:03.560: E/System(4990): at libcore.io.BlockGuardOs.close(BlockGuardOs.java:75)
01-21 00:43:03.560: E/System(4990): at libcore.io.IoUtils.close(IoUtils.java:38)
01-21 00:43:03.560: E/System(4990): ... 5 more
01-21 00:43:03.630: D/dalvikvm(4990): GC_CONCURRENT freed 487K, 7% free 14299K/15367K, paused 12ms+3ms, total 29ms
01-21 00:43:03.630: D/dalvikvm(4990): WAIT_FOR_CONCURRENT_GC blocked 9ms
01-21 00:43:03.635: D/AbsListView(4990): Get MotionRecognitionManager
01-21 00:43:03.670: D/SensorManager(4990): unregisterListener:: Listener= android.view.OrientationEventListener$SensorEventListenerImpl#41d803c0
01-21 00:43:03.670: D/Sensors(4990): Remain listener = Sending .. normal delay 200ms
01-21 00:43:03.670: I/Sensors(4990): sendDelay --- 200000000
01-21 00:43:03.670: D/SensorManager(4990): JNI - sendDelay
01-21 00:43:03.670: I/SensorManager(4990): Set normal delay = true
01-21 00:43:03.800: D/dalvikvm(4990): GC_CONCURRENT freed 291K, 6% free 14721K/15623K, paused 13ms+15ms, total 54ms
01-21 00:43:03.875: D/dalvikvm(4990): GC_FOR_ALLOC freed 653K, 10% free 14731K/16199K, paused 26ms, total 27ms
01-21 00:43:03.955: D/dalvikvm(4990): GC_FOR_ALLOC freed 642K, 10% free 14730K/16199K, paused 23ms, total 24ms
01-21 00:43:03.965: W/IInputConnectionWrapper(4990): getSelectedText on inactive InputConnection
01-21 00:43:03.970: W/IInputConnectionWrapper(4990): setComposingText on inactive InputConnection
But I don't think the I/O error is related to Maps...
Here is a screenshot.
Does anyone know what's happening here?
Thanks in advance,
César.
STATUS: SOLVED
It was an API key problem. For more info, see:
Google Maps Android v2 Authorization failure
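For reference, the Maps v2 API key has to be declared in AndroidManifest.xml. A minimal sketch (the key value below is a placeholder, not from the original post):

```xml
<!-- Inside <application>: the API key from the Google APIs Console.
     "YOUR_API_KEY" is a placeholder value. -->
<meta-data
    android:name="com.google.android.maps.v2.API_KEY"
    android:value="YOUR_API_KEY" />
```

Also worth verifying: the package name and signing certificate's SHA-1 fingerprint registered with the key must match the build, since a mismatch produces an authorization failure with blank tiles and no obvious error in the log.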

Related

How to fix Azure functions "inotify limit" error

I am trying to run a durable entity that I originally described in this question. While trying to figure out that issue, I started getting the error below after a few runs.
The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.
Below is the full log of the error. Can someone point me to the possible cause and fix?
user:func1$ func start -p 8080
Found Python version 3.6.9 (python3).
Azure Functions Core Tools
Core Tools Version: 4.0.3971 Commit hash: d0775d487c93ebd49e9c1166d5c3c01f3c76eaaf (64-bit)
Function Runtime Version: 4.0.1.16815
Functions:
Entityfn: entityTrigger
KWhCalculator: activityTrigger
Orch: orchestrationTrigger
Starter: eventHubTrigger
For detailed output, run func with --verbose flag.
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://127.0.0.1:44825/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
[2022-01-24T10:58:42.582Z] Worker process started and initialized.
[2022-01-24T10:58:47.753Z] Host lock lease acquired by instance ID '000000000000000000000000A845906C'.
[2022-01-24T10:58:47.833Z] A host error has occurred during startup operation '9803b89f-6cf1-4fe1-a482-075e770a9fea'.
[2022-01-24T10:58:47.833Z] System.IO.FileSystem.Watcher: The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/2 POST http://127.0.0.1:44825/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc - - 200 - application/grpc 5879.0529ms
info: Grpc.AspNetCore.Server.ServerCallHandler[14]
Error reading message.
System.IO.IOException: The request stream was aborted.
---> Microsoft.AspNetCore.Connections.ConnectionAbortedException: The HTTP/2 connection faulted.
--- End of inner exception stack trace ---
at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
at System.IO.Pipelines.Pipe.GetReadAsyncResult()
at System.IO.Pipelines.Pipe.DefaultPipeReader.GetResult(Int16 token)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http2.Http2MessageBody.ReadAsync(CancellationToken cancellationToken)
at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
at Grpc.AspNetCore.Server.Internal.PipeExtensions.ReadStreamMessageAsync[T](PipeReader input, HttpContextServerCallContext serverCallContext, Func`2 deserializer, CancellationToken cancellationToken)
info: Microsoft.AspNetCore.Server.Kestrel[32]
Connection id "0HMEV51U1VS7R", Request id "0HMEV51U1VS7R:00000001": the application completed without reading the entire request body.
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://127.0.0.1:44825/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
[2022-01-24T10:58:49.902Z] Worker process started and initialized.
[2022-01-24T10:58:55.521Z] Host lock lease acquired by instance ID '000000000000000000000000A845906C'.
[2022-01-24T10:58:55.578Z] A host error has occurred during startup operation '3b77fea3-afd0-427c-9d79-3175d7f0b815'.
[2022-01-24T10:58:55.578Z] System.IO.FileSystem.Watcher: The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
Executed endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
info: Grpc.AspNetCore.Server.ServerCallHandler[14]
Error reading message.
System.IO.IOException: The request stream was aborted.
---> Microsoft.AspNetCore.Connections.ConnectionAbortedException: The HTTP/2 connection faulted.
--- End of inner exception stack trace ---
at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
at System.IO.Pipelines.Pipe.GetReadAsyncResult()
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http2.Http2MessageBody.ReadAsync(CancellationToken cancellationToken)
at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
at Grpc.AspNetCore.Server.Internal.PipeExtensions.ReadStreamMessageAsync[T](PipeReader input, HttpContextServerCallContext serverCallContext, Func`2 deserializer, CancellationToken cancellationToken)
info: Microsoft.AspNetCore.Hosting.Diagnostics[2]
Request finished HTTP/2 POST http://127.0.0.1:44825/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc - - 200 - application/grpc 6153.6504ms
info: Microsoft.AspNetCore.Server.Kestrel[32]
Connection id "0HMEV51U1VS7S", Request id "0HMEV51U1VS7S:00000001": the application completed without reading the entire request body.
info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
Request starting HTTP/2 POST http://127.0.0.1:44825/AzureFunctionsRpcMessages.FunctionRpc/EventStream application/grpc -
info: Microsoft.AspNetCore.Routing.EndpointMiddleware[0]
Executing endpoint 'gRPC - /AzureFunctionsRpcMessages.FunctionRpc/EventStream'
Restarting VS Code and creating a new Azure Functions project did not help.
Fix: I restarted my system and the error was gone.
Cause: Not sure, but it looks like running locally for a long time with large JSON messages may have opened a lot of file watchers on the system.
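The restart most likely cleared leaked inotify instances. On Linux you can check how close you are to the limit directly:

```shell
# Current per-user limit on inotify instances
# (the default on many distributions is 128, matching the error message)
cat /proc/sys/fs/inotify/max_user_instances
```

If it is still at the default, raising it may avoid the need for a reboot, e.g. `sudo sysctl fs.inotify.max_user_instances=512` for the running system, persisted by adding `fs.inotify.max_user_instances=512` to /etc/sysctl.conf. The value 512 is an arbitrary choice, not a recommendation from the original thread.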

Driver stops executors without a reason

I have an application based on Spark Structured Streaming 3 with Kafka that processes some user logs; after some time the driver starts killing the executors, and I don't understand why.
The executor logs don't contain any errors. I'm leaving the logs from the executors and the driver below.
On executor 1:
20/08/31 10:01:31 INFO executor.Executor: Finished task 5.0 in stage 791.0 (TID 46411). 1759 bytes result sent to driver
20/08/31 10:01:33 INFO executor.YarnCoarseGrainedExecutorBackend: Driver commanded a shutdown
On executor 2:
20/08/31 10:14:33 INFO executor.YarnCoarseGrainedExecutorBackend: Driver commanded a shutdown
20/08/31 10:14:34 INFO memory.MemoryStore: MemoryStore cleared
20/08/31 10:14:34 INFO storage.BlockManager: BlockManager stopped
20/08/31 10:14:34 INFO util.ShutdownHookManager: Shutdown hook called
On the driver:
20/08/31 10:01:33 ERROR cluster.YarnScheduler: Lost executor 3 on xxx.xxx.xxx.xxx: Executor heartbeat timed out after 130392 ms
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Lost executor 2 on xxx.xxx.xxx.xxx: Executor heartbeat timed out after 125773 ms
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129308 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129314 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129311 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
20/08/31 10:53:33 ERROR cluster.YarnScheduler: Ignoring update with state FINISHED for TID 129305 because its task set is gone (this is likely the result of receiving duplicate task finished status updates) or its executor has been marked as failed.
Has anyone had the same problem and solved it?
Looking at the information at hand:
no errors
Driver commanded a shutdown
YARN logs showing "state FINISHED"
this seems to be expected behavior.
This typically happens if you forget to await the termination of the Spark streaming query. If you do not conclude your code with
query.awaitTermination()
your streaming application will simply shut down after all data has been processed.
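As a sketch of what that looks like in a Structured Streaming job (the broker address, topic, and paths are placeholders, not from the question):

```scala
import org.apache.spark.sql.SparkSession

object StreamApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("user-logs").getOrCreate()

    val logs = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder
      .option("subscribe", "user-logs")                 // placeholder topic
      .load()

    val query = logs.writeStream
      .format("parquet")
      .option("path", "/data/out")                       // placeholder
      .option("checkpointLocation", "/data/checkpoints") // placeholder
      .start()

    // Without this, main() returns as soon as the query is started
    // and the application shuts down, taking the executors with it.
    query.awaitTermination()
  }
}
```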

"ActiveMQ Broker[localhost] Scheduler" java.lang.OutOfMemoryError: GC overhead limit exceeded && Exception in thread "ActiveMQ Transport Server: "

I am getting the exception below in the application startup log, after a gap of roughly 5 days to 1 week.
Exception in thread "ActiveMQ Broker[localhost] Scheduler" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.activemq.command.ActiveMQDestination.getQualifiedName(ActiveMQDestination.java:232)
at org.apache.activemq.broker.region.Queue.expireMessages(Queue.java:928)
at org.apache.activemq.broker.region.Queue.access$100(Queue.java:106)
at org.apache.activemq.broker.region.Queue$2.run(Queue.java:149)
at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
Exception in thread "ActiveMQ Transport Server: ssl://<ip>:<port>?socket.needClientAuth=true" java.lang.OutOfMemoryError: GC overhead limit exceeded
at sun.security.ssl.InputRecord.<init>(InputRecord.java:93)
at sun.security.ssl.AppInputStream.<init>(AppInputStream.java:50)
at sun.security.ssl.SSLSocketImpl.init(SSLSocketImpl.java:640)
at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:524)
at sun.security.ssl.SSLServerSocketImpl.accept(SSLServerSocketImpl.java:343)
at org.apache.activemq.transport.tcp.TcpTransportServer.doRunWithServerSocket(TcpTransportServer.java:403)
at org.apache.activemq.transport.tcp.TcpTransportServer.run(TcpTransportServer.java:325)
at java.lang.Thread.run(Thread.java:748)
Exception in thread "pool-3-thread-123443" Exception in thread "ActiveMQ Broker[localhost] Scheduler" Exception in thread "pool-3-thread-123443" java.lang.OutOfMemoryError: GC overhead limit exceeded
The application got killed after writing the GC statement.
Can anyone please help me understand this issue from a solution perspective?
Note: I have not changed the default systemUsage memory settings that come with ActiveMQ. This entire application consumes messages from an ActiveMQ queue over ssl://:?socket.needClientAuth=true. Do I need to modify the broker URL to add anything?
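"GC overhead limit exceeded" means the heap is effectively full. Two hedged things to check (not confirmed as the cause here): the JVM heap given to the broker (ACTIVEMQ_OPTS in bin/env), and the broker's systemUsage limits in conf/activemq.xml, which cap how much message data the broker keeps in memory before spilling to disk. A sketch of the latter; the values are illustrative, not recommendations:

```xml
<!-- conf/activemq.xml: cap broker memory so accumulating or expired
     messages spill to the store instead of filling the JVM heap.
     All limits below are example values. -->
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="512 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="2 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```

The stack trace at Queue.expireMessages also suggests checking whether messages are piling up unexpired or unconsumed on a queue over those 5-7 days.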

Nodetool repair erroring on cassandra 3.9 cluster complaining of dead nodes

I have a Cassandra 3.9 cluster. I initiated a repair from one of the nodes in the cluster. The repair went nowhere. The logs on the initiating node are filled with errors like this:
ERROR [GossipTasks:1] 2018-02-16 23:27:36,949 RepairSession.java:347 - [repair #cadf6f11-1342-11e8-8d73-6767c6890f70] session completed with the following error
java.io.IOException: Endpoint /**.**.**.52 died
at org.apache.cassandra.repair.RepairSession.convict(RepairSession.java:346) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.FailureDetector.interpret(FailureDetector.java:306) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper.doStatusCheck(Gossiper.java:782) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper.access$800(Gossiper.java:66) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:181) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118) [apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_91]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_91]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
On the other hand, if I look at the logs of the nodes claimed to be dead, I see one of three symptoms:
Either the node claims to have successfully sent the requested Merkle tree over,
or the node has no trace of the repair session, and thus doesn't appear to have received any repair request,
or the node shows an exception like this:
ERROR [ValidationExecutor:3] 2018-02-16 23:29:06,548 Validator.java:261 - Failed creating a merkle tree for [repair #cac2bf50-1342-11e8-8d73-6767c6890f70 on somekeyspace/sometable, [(-3531087107126953137,-3495591103116433105], (1424707151780052485,1425479237398192865], (-3533012126945497873,-3531087107126953137], (1425479237398192865,1429220273719165251], (-4991682772598302168,-4984938905452900436], (-7686750611814623539,-7685228552629222537], (7554301216433235881,7559623046999138658], (334796420453180909,342318143371667659], (-3538876023288368831,-3533012126945497873], (1409514567521922418,1424707151780052485], (5391546013321073004,5393284101537339558], (590921410556013711,593440512568877190]]], /..**.43 (see log for details)
ERROR [ValidationExecutor:3] 2018-02-16 23:29:06,549 CassandraDaemon.java:226 - Exception in thread Thread[ValidationExecutor:3,1,main]
java.lang.RuntimeException: Parent repair session with id = c8bf7540-1342-11e8-8d73-6767c6890f70 has failed.
at org.apache.cassandra.service.ActiveRepairService.getParentRepairSession(ActiveRepairService.java:377) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.getSSTablesToValidate(CompactionManager.java:1313) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1222) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:81) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.compaction.CompactionManager$11.call(CompactionManager.java:844) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Is this a known issue?
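Not an answer, but two hedged first steps for anyone hitting this (suggestions of mine, not from the original thread): confirm whether gossip actually sees the convicted peers as up, and try a smaller, primary-range repair to see whether reduced load avoids the conviction:

```shell
# From the initiating node: verify cluster and gossip state
nodetool status
nodetool gossipinfo

# Retry with a primary-range repair on a single keyspace to reduce load
# (<keyspace> is a placeholder for your own keyspace name)
nodetool repair -pr <keyspace>
```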

Dataproc Spark Streaming Kafka checkpointing warning on Google Cloud Storage

I get a lot of warnings when using Dataproc 1.1 (Spark 2.0.2) with Kafka checkpointing on Google Cloud Storage. I get the following warning:
16/12/11 01:36:02 WARN HttpTransport: exception thrown while executing request
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.listStorageObjectsAndPrefixes(GoogleCloudStorageImpl.java:1069)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.listObjectNames(GoogleCloudStorageImpl.java:1173)
at com.google.cloud.hadoop.gcsio.ForwardingGoogleCloudStorage.listObjectNames(ForwardingGoogleCloudStorage.java:182)
at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.listObjectNames(CacheSupplementedGoogleCloudStorage.java:381)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getInferredItemInfo(GoogleCloudStorageFileSystem.java:1286)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getInferredItemInfos(GoogleCloudStorageFileSystem.java:1311)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfos(GoogleCloudStorageFileSystem.java:1212)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.rename(GoogleCloudStorageFileSystem.java:640)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.rename(GoogleHadoopFileSystemBase.java:1091)
at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:241)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This happens several times in a row and eventually just blocks our Spark Streaming job on a task that never finishes. I got other warnings before that, too:
16/12/10 18:05:23 WARN ReceivedBlockTracker: Exception thrown while writing record: BatchCleanupEvent(ArrayBuffer()) to the WriteAheadLog.
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
at org.apache.spark.streaming.util.BatchedWriteAheadLog.write(BatchedWriteAheadLog.scala:83)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.writeToLog(ReceivedBlockTracker.scala:234)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.cleanupOldBatches(ReceivedBlockTracker.scala:171)
at org.apache.spark.streaming.scheduler.ReceiverTracker.cleanupOldBlocksAndBatches(ReceiverTracker.scala:226)
at org.apache.spark.streaming.scheduler.JobGenerator.clearCheckpointData(JobGenerator.scala:287)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:187)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [5000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
... 9 more
16/12/10 18:05:23 WARN ReceivedBlockTracker: Failed to acknowledge batch clean up in the Write Ahead Log.
Does anyone have the same issues?
Regards,
I faced similar errors checkpointing to Google Cloud Storage recently. As a temporary workaround, I started checkpointing to HDFS on Dataproc rather than Google Cloud Storage.
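For a DStream-based job like the one in the question (Dataproc clusters include HDFS), the workaround amounts to pointing the checkpoint directory at an hdfs:// path instead of a gs:// bucket. A sketch; the paths and batch interval are placeholders:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("kafka-stream") // placeholder app name
val ssc = new StreamingContext(conf, Seconds(10))     // placeholder interval

// Checkpoint to cluster-local HDFS instead of Google Cloud Storage,
// avoiding the GCS rename/list calls seen in the stack trace.
ssc.checkpoint("hdfs:///user/spark/checkpoints")      // was: "gs://<bucket>/checkpoints"
```

Note that HDFS checkpoints live on the cluster's disks, so they are lost if the cluster is deleted, which is why this is only a temporary workaround.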
