Reduce the retry time in Core Data iCloud (ubiquity) sync

This is what appears in the log; note the "Retrying after delay: 60" (seconds) at the end:
2013-10-09 15:21:59.807 MyApp[4848:6507] -[PFUbiquitySafeSaveFile waitForFileToUpload:](299): CoreData: Ubiquity: <PFUbiquityPeerReceipt: 0x15e09de0>(0)
permanentLocation: <PFUbiquityLocation: 0x15e22100>: /var/mobile/Library/Mobile Documents/HHWT75NS6T~com~xyz~MyApp/CoreData/New Project retry 2/mobile~A3BF465A-15C7-46F2-8C68-AC8F49FD15AB/New Project retry 2/W8MckEJ0x2d~HZZIeUH2be6hs41TEOONzKIrCuLcuP4=/receipt.0.cdt
safeLocation: <PFUbiquityLocation: 0x15ec4660>: /var/mobile/Library/Mobile Documents/HHWT75NS6T~com~xyz~MyApp/CoreData/New Project retry 2/mobile~A3BF465A-15C7-46F2-8C68-AC8F49FD15AB/New Project retry 2/W8MckEJ0x2d~HZZIeUH2be6hs41TEOONzKIrCuLcuP4=/mobile~A3BF465A-15C7-46F2-8C68-AC8F49FD15AB.0.cdt
currentLocation: <PFUbiquityLocation: 0x15ec4660>: /var/mobile/Library/Mobile Documents/HHWT75NS6T~com~xyz~MyApp/CoreData/New Project retry 2/mobile~A3BF465A-15C7-46F2-8C68-AC8F49FD15AB/New Project retry 2/W8MckEJ0x2d~HZZIeUH2be6hs41TEOONzKIrCuLcuP4=/mobile~A3BF465A-15C7-46F2-8C68-AC8F49FD15AB.0.cdt
kv: (null)
Safe save failed for file, error: Error Domain=NSCocoaErrorDomain Code=260 "The operation couldn’t be completed. (Cocoa error 260 - Request for item properties on non-existent item.)" UserInfo=0x15eaedf0 {NSDescription=Request for item properties on non-existent item.}
2013-10-09 15:21:59.809 iProjectiPad[4848:6507] -[PFUbiquitySetupAssistant finishSetupWithRetry:](811): CoreData: Ubiquity: <PFUbiquitySetupAssistant: 0x15e4e520>: Retrying after delay: 60

Related

Get-WinEvent output is truncated

I want to get the records from the event log, but the messages keep getting truncated.
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5
TimeCreated Id LevelDisplayName Message
10/21/2021 2:29:20 PM 19102 Error Getting server publishing data failed....
10/21/2021 2:29:20 PM 19203 Error HttpRequest sendRequest failed....
10/21/2021 2:29:05 PM 19102 Error Getting server publishing data failed....
10/21/2021 2:29:05 PM 19203 Error HttpRequest sendRequest failed....
10/21/2021 2:28:50 PM 19102 Error Getting server publishing data failed....
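A likely fix, assuming the truncation comes from PowerShell's default table formatting rather than from Get-WinEvent itself, is to let the Message column wrap, or to expand the property outright:

# Let the Message column wrap instead of truncating
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -Wrap

# Or extract only the full message text
Get-WinEvent -LogName 'Microsoft-AppV-Client/Admin' -MaxEvents 5 |
    Select-Object -ExpandProperty Message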

How can I handle the TLS certificate error with fabric-sdk-node?

I'm trying to build a web front end for a Fabric network. After completing registerAdmin and registerUser, I get an error when trying to run my JS code.
root@oyu-virtual-machine:~/hyperledger-fabric/test/webapp# node get.js
Load privateKey and signedCert
Get History
Assigning transaction_id: 35e9ed932366df66448d789fbf5989e6ba31be555f96eaca3197475a1602749c
E0429 15:09:28.130483373 4413 ssl_transport_security.cc:1245] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
returned from gethistory
Gethistory result count = 1
error from gethistory = Error: 14 UNAVAILABLE: failed to connect to all addresses
at Object.exports.createStatusError (/root/hyperledger-fabric/test/webapp/node_modules/fabric-client/node_modules/grpc/src/common.js:91:15)
at Object.onReceiveStatus (/root/hyperledger-fabric/test/webapp/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:1209:28)
at InterceptingListener._callNext (/root/hyperledger-fabric/test/webapp/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/root/hyperledger-fabric/test/webapp/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/root/hyperledger-fabric/test/webapp/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:847:24) {
code: 14,
metadata: Metadata { _internal_repr: {}, flags: 0 },
details: 'failed to connect to all addresses'
}
Response is Error: 14 UNAVAILABLE: failed to connect to all addresses
In my opinion, the most important message is:
E0429 15:09:28.130483373 4413 ssl_transport_security.cc:1245] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
It isn't generated by my code; the system produced it.
I do hope someone could help me. Thanks in advance.
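The usual cause of "certificate verify failed" here is that the peer's TLS CA certificate is never handed to the gRPC layer. A minimal sketch with the fabric-client v1.x API; the certificate path and peer hostname below are hypothetical placeholders:

const fs = require('fs');
const Client = require('fabric-client');

const client = new Client();

// TLS CA cert that signed the peer's TLS certificate (path is an assumption)
const pem = fs.readFileSync('/path/to/tlsca.org1.example.com-cert.pem', 'utf8');

// Use grpcs:// and pass the CA cert; ssl-target-name-override must match the
// hostname baked into the peer's TLS certificate (placeholder name here)
const peer = client.newPeer('grpcs://localhost:7051', {
  pem: pem,
  'ssl-target-name-override': 'peer0.org1.example.com'
});

If you connect through a connection profile instead, the same certificate typically goes under the peer's tlsCACerts entry.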

Intermittent connection timeout errors in a Spark streaming job

I have a Spark (2.1) streaming job that writes processed data to Azure blob storage every batch (batch interval 1 min). Every couple of hours I get a java.net.ConnectException with a connection timeout message. The operation does get retried and eventually succeeds, but when the error occurs it delays the 1 min streaming batch, which then takes 2 to 3 min to finish.
Below is the executor log snippet with the error message. I have spark.executor.cores=5.
Is there some limit on the number of connections that might be causing this?
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting operation.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting operation with location 'PRIMARY' per location mode 'PRIMARY_ONLY'.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Starting request to 'http://<name>.blob.core.windows.net/rawData/2017/10/11/16/data1.json' at 'Wed, 11 Oct 2017 16:09:02 GMT'.}
17/10/11 16:09:02 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Waiting for response.}
..
17/10/11 16:11:09 WARN root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Retryable exception thrown. Class = 'java.net.ConnectException', Message = 'Connection timed out (Connection timed out)'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Checking if the operation should be retried. Retry count = '0', HTTP status code = '-1', Error Message = 'An unknown failure occurred : Connection timed out (Connection timed out)'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {The next location has been set to 'PRIMARY', per location mode 'PRIMARY_ONLY'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {The retry policy set the next location to 'PRIMARY' and updated the location mode to 'PRIMARY_ONLY'.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Operation will be retried after '0'ms.}
17/10/11 16:11:09 INFO root: {89f867cc-cbd3-4fa9-a549-4e07be3f69b0}: {Retrying failed operation.}
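I can't confirm a hard connection limit from the log alone, but one mitigation to try is tightening the storage client's retry and backoff so a hung connect fails fast and retries within the batch interval. Here is a sketch of passing the settings through SparkConf; the fs.azure.io.retry.* keys are an assumption, so verify them against your hadoop-azure version:

// Hadoop settings prefixed with "spark.hadoop." are forwarded to the
// FileSystem configuration used on the executors.
val conf = new org.apache.spark.SparkConf()
  .set("spark.hadoop.fs.azure.io.retry.max.retries", "3")              // assumed key
  .set("spark.hadoop.fs.azure.io.retry.min.backoff.interval", "1000")  // ms, assumed key
  .set("spark.hadoop.fs.azure.io.retry.max.backoff.interval", "5000")  // ms, assumed key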

Node rdkafka consumer stops fetching messages

The consumer stops fetching after getting the following error:
{ Error: Local: Broker transport failure
    at Error (native)
  origin: 'local',
  message: 'broker transport failure',
  code: -1,
  errno: -1,
  stack: 'Error: Local: Broker transport failure\n at Error (native)' }
All Kafka brokers are healthy, yet I still get this transport failure.
After the error, the consumer stops fetching any further messages.
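"Local: Broker transport failure" (code -1) is librdkafka reporting a dropped TCP connection, often an idle connection killed by a firewall or load balancer even when the brokers themselves are healthy. A minimal node-rdkafka sketch that enables TCP keepalive and surfaces transport errors instead of stalling silently; broker, group, and topic names are placeholders:

const Kafka = require('node-rdkafka');

const consumer = new Kafka.KafkaConsumer({
  'group.id': 'my-group',                  // placeholder group id
  'metadata.broker.list': 'broker1:9092',  // placeholder broker
  'socket.keepalive.enable': true,         // keep idle connections alive
  'event_cb': true                         // deliver librdkafka events to JS
}, {});

consumer.on('event.error', (err) => {
  // code -1 / "broker transport failure" lands here; it is often transient
  console.error('transport error:', err);
});

consumer.on('ready', () => {
  consumer.subscribe(['my-topic']);        // placeholder topic
  consumer.consume();
});

consumer.connect();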

Spark: Frequent Pattern Mining: issues in saving the results

I am using Spark's FP-growth algorithm. I was getting OOM errors when doing a collect, so I changed the code to save the results to a text file on HDFS rather than collecting them on the driver node. Here is the relevant code:
// Model building:
val fpg = new FPGrowth()
  .setMinSupport(0.01)
  .setNumPartitions(10)
val model = fpg.run(transaction_distinct)
Here is a transformation that should give me an RDD[String]:
val mymodel = model.freqItemsets.map { itemset =>
  val model_res = itemset.items.mkString("[", ",", "]") + ", " + itemset.freq
  model_res
}
I then save the model results as follows. Unfortunately, this is really slow:
mymodel.saveAsTextFile("fpm_model")
I get these errors:
16/02/04 14:47:28 ERROR ErrorMonitor: AssociationError [akka.tcp://sparkDriver@ipaddress:46811] -> [akka.tcp://sparkExecutor@hostname:39720]: Error [Association failed with [akka.tcp://sparkExecutor@hostname:39720]][akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@hostname:39720]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hostname/ipaddress:39720] akka.event.Logging$Error$NoCause$
16/02/04 14:47:28 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(3, hostname, 58683)
16/02/04 14:47:28 INFO BlockManagerMaster: Removed 3 successfully in removeExecutor
16/02/04 14:47:28 ERROR ErrorMonitor: AssociationError [akka.tcp://sparkDriver@ipaddress:46811] -> [akka.tcp://sparkExecutor@hostname:39720]: Error [Association failed with [akka.tcp://sparkExecutor@hostname:39720]][akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@hostname:39720]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hostname/ipaddress:39720
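The "Association failed" / "Connection refused" lines mean the driver lost contact with an executor, which usually follows an executor death (often OOM) rather than a problem in saveAsTextFile itself, so increasing executor memory is the first thing to check. Beyond that, one thing worth trying, sketched below with the names from the code above, is writing through fewer, larger partitions so the save isn't dominated by many tiny output files:

val mymodel = model.freqItemsets.map { itemset =>
  itemset.items.mkString("[", ",", "]") + ", " + itemset.freq
}

// coalesce(10) mirrors setNumPartitions(10) above; the right number of
// output files is an assumption to tune for your cluster
mymodel.coalesce(10).saveAsTextFile("fpm_model")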
