Product Model Display Key Null Pointer - Guidewire

When attempting to connect to either the local or the SQL DB, I get the below error upon starting my server. Because it is failing in an OOTB class, I haven't been able to debug it.
lnar-5cg84268sc 2020-01-21 15:07:38,381 ERROR Server.RunLevel ***** PolicyCenter unable to start *****
java.lang.NullPointerException
at gw.api.productmodel.ProductModelDisplayKey.getPath(ProductModelDisplayKey.java:41)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verifyDisplayKeyNotEmpty(ProductModelObjectBase.java:647)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verifyFields(ProductModelObjectBase.java:587)
at com.guidewire.pc.api.productmodel.AuditSchedulePatternInternal.verifyFields(AuditSchedulePatternInternal.java:187)
at com.guidewire.pc.api.productmodel.ProductModelObjectBase.verify(ProductModelObjectBase.java:523)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModel(ProductModelImpl.java:1685)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModel(ProductModelImpl.java:1640)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.verifyProductModelIfNeeded(ProductModelImpl.java:336)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.lambda$activateVerifyAndLockPatternsIfNeeded$0(ProductModelImpl.java:322)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl$$Lambda$325/406648867.accept(Unknown Source)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.lambda$runWithinTransaction$4(ProductModelImpl.java:2099)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl$$Lambda$326/1957698296.run(Unknown Source)
at com.guidewire.pl.system.transaction.BootstrapTransaction.run(BootstrapTransaction.java:44)
at com.guidewire.pl.system.transaction.TransactionManagerImpl.execute(TransactionManagerImpl.java:109)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.runWithinTransaction(ProductModelImpl.java:2098)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.activateVerifyAndLockPatternsIfNeeded(ProductModelImpl.java:316)
at com.guidewire.pc.domain.productmodel.impl.ProductModelImpl.start(ProductModelImpl.java:237)
at com.guidewire.pl.system.server.InitTab.startDependency(InitTab.java:465)
at com.guidewire.pc.system.server.PCInitTab.applicationEnterNoDaemons(PCInitTab.java:58)
at com.guidewire.pl.system.server.InitTab.enterNoDaemons(InitTab.java:875)
at com.guidewire.pl.system.server.InitTab.increaseRunLevelTo(InitTab.java:650)
at com.guidewire.pl.system.server.InitTab.setRunLevel(InitTab.java:380)
at com.guidewire.pl.system.servlet.GuidewireStartupServlet.init(GuidewireStartupServlet.java:88)
at javax.servlet.GenericServlet.init(GenericServlet.java:244)
at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:540)
at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:349)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:812)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:288)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1322)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:732)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:490)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:118)
at org.eclipse.jetty.server.Server.start(Server.java:342)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:100)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:60)
at org.eclipse.jetty.server.Server.doStart(Server.java:290)
at com.guidewire.commons.jetty.GWServerJettyServerMain$JettyServer.doStart(GWServerJettyServerMain.java:83)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1250)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.eclipse.jetty.start.Main.invokeMain(Main.java:509)
at org.eclipse.jetty.start.Main.start(Main.java:651)
at org.eclipse.jetty.start.Main.main(Main.java:99)
at com.guidewire.commons.jetty.GWServerJettyServerMain.main(GWServerJettyServerMain.java:69)

This error usually means an existing OOTB display key has been deleted. Go to Local History in your IDE, revert all changes, and try to restart PolicyCenter; alternatively, review the same changes in Git, revert them all, and try to start the server again.

Related

CICS Error Unable to create connection mainframe system

One of my applications is integrated with a mainframe system through CICS/CTG. I am facing an error while executing a request. Also, I have used ASN.1 encoding for the request.
The error I am getting while executing the request:
com.ibm.connector2.cics.CICSUserInputException: CTG9627E IOException occurred when writing to the Output Record
org.springframework.dao.NonTransientDataAccessResourceException: Unable to create a connection to the remote application; nested exception is com.ibm.connector2.cics.CICSUserInputException:
CTG9627E IOException occurred when writing to the Output Record
com.ibm.connector2.cics.CICSUserInputException: CTG9627E IOException occurred when writing to the Output Record
at com.ibm.connector2.cics.ECIManagedConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIInteraction.execute(Unknown Source)
java.io.IOException: messagelength in header greater than existing data length - common area too short?
at com.ibm.connector2.cics.ECIManagedConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIConnection.call(Unknown Source)
at com.ibm.connector2.cics.ECIInteraction.execute(Unknown Source)
I am using CICS version c900-20160704-0205.
Does anyone have any insights about this?
The error description is available at https://www.ibm.com/docs/en/cics-tg-multi/9.0?topic=SSZHFX_9.0.0/cclaj/CTG9627E.htm
It seems like the data you are passing is not an instance of javax.resource.cci.Streamable. Could you verify that?
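For reference, a minimal sketch of a record class that satisfies that check, assuming the standard JCA CCI interfaces; the CommareaRecord name and its payload field are hypothetical:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.resource.cci.Record;
import javax.resource.cci.Streamable;

// Hypothetical record: the ECI connector streams these bytes into and out of the COMMAREA.
public class CommareaRecord implements Record, Streamable {
    private byte[] payload = new byte[0];
    private String name;
    private String description;

    // Streamable: the connector calls this to fill the record from the reply stream.
    public void read(InputStream istream) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = istream.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        payload = buf.toByteArray();
    }

    // Streamable: the connector calls this to write the request bytes out.
    public void write(OutputStream ostream) throws IOException {
        ostream.write(payload);
    }

    public String getRecordName() { return name; }
    public void setRecordName(String name) { this.name = name; }
    public String getRecordShortDescription() { return description; }
    public void setRecordShortDescription(String description) { this.description = description; }
    public Object clone() throws CloneNotSupportedException { return super.clone(); }
}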
Solved the issue with the below resolution.
"messagelength in header greater than existing data length - common area too short?" - as this message says, the common area is too short, so I increased the length of the common area as per this documentation: https://www.ibm.com/docs/en/cics-ts/5.6?topic=applications-transferring-data-between-programs-using-channels
I added the following call in the CTG Service executor >> CTG Record:
setCommonAreaLength(32500)
After applying this resolution the issue was resolved. Hope this answer helps someone.
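For context, a minimal sketch of how the same limit is set through the standard CICS TG JCA classes; setCommonAreaLength above belongs to the poster's CTG wrapper, while ECIInteractionSpec.setCommareaLength is the underlying resource-adapter call, and the program name here is hypothetical:

import com.ibm.connector2.cics.ECIInteractionSpec;

// Sketch: size the COMMAREA so the reply's declared message length fits.
// 32500 matches the value used above; the CICS COMMAREA hard limit is 32767 bytes.
ECIInteractionSpec spec = new ECIInteractionSpec();
spec.setFunctionName("MYPROG");                                // hypothetical CICS program
spec.setInteractionVerb(ECIInteractionSpec.SYNC_SEND_RECEIVE); // send and wait for the reply
spec.setCommareaLength(32500);                                 // bytes sent to the program
spec.setReplyLength(32500);                                    // maximum bytes expected back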

write dataframe to cassandra facing BusyPoolException

I am trying to write a dataframe to Cassandra using the lines of code below. I was able to write to the table for some days, but suddenly this error appeared:
alertdf
  .write.format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "dummy", "table" -> "dummytable"))
  .mode(SaveMode.Append)
  .save()
I get the below error and am not able to find out what is going wrong:
ERROR QueryExecutor: Failed to execute: com.datastax.spark.connector.writer.RichBoundStatement#7dba59e2
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: **.**.**.**/**.**.**.**:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [**.**.**.**/**.**.**.**] Pool is busy (no available connection and the queue has reached its max size 256)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:338)
at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
at shade.com.datastax.spark.connector.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:106)
at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures.addCallback(Futures.java:1322)
at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures.addCallback(Futures.java:1258)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:297)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:40)
at com.sun.proxy.$Proxy14.executeAsync(Unknown Source)
at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:40)
at com.sun.proxy.$Proxy15.executeAsync(Unknown Source)
at com.datastax.spark.connector.writer.QueryExecutor$$anonfun$$lessinit$greater$1.apply(QueryExecutor.scala:11)
at com.datastax.spark.connector.writer.QueryExecutor$$anonfun$$lessinit$greater$1.apply(QueryExecutor.scala:11)
at com.datastax.spark.connector.writer.AsyncExecutor.executeAsync(AsyncExecutor.scala:31)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1$$anonfun$apply$2.apply(TableWriter.scala:199)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1$$anonfun$apply$2.apply(TableWriter.scala:198)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:198)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:175)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:112)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:145)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:175)
at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:162)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:149)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Can anyone help me with this issue?
It looks like your servers are overloaded and don't process your requests in time. I recommend trying to tune the write-related configuration parameters, such as output.concurrent.writes and output.throughput_mb_per_sec, among others, but I would start with those first two.
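A minimal sketch of where those settings go, assuming the spark-cassandra-connector's spark.cassandra.* configuration keys (defaults and exact behaviour vary by connector version):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

// Sketch: lower write concurrency and cap throughput so each Cassandra
// node's connection pool is not saturated (the BusyPoolException above).
SparkConf conf = new SparkConf()
    .set("spark.cassandra.output.concurrent.writes", "2")       // fewer parallel batches per task
    .set("spark.cassandra.output.throughput_mb_per_sec", "5");  // throttle write bandwidth
SparkSession spark = SparkSession.builder().config(conf).getOrCreate();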

Unable to join race on local server

I installed the Code Rally server following the guide on IBM.
It runs, I can access the server information page and see the leaderboard.
But when I try to enter a race with Eclipse I get a
"Unable to enter Keepertje on Localhost"
I also tried to connect with the node client found on GitHub, but I cannot authenticate there either. Am I missing something?
Kind regards,
Cindy
My Server.xml:
<!-- Enable features -->
<featureManager>
    <feature>webProfile-7.0</feature>
    <feature>localConnector-1.0</feature>
    <feature>websocket-1.1</feature>
</featureManager>
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
<!-- Automatically expand WAR files and EAR files -->
<applicationManager autoExpand="true"/>
<applicationMonitor updateTrigger="mbean"/>
<webApplication id="CodeRallyWeb" location="CodeRallyWeb.war" name="CodeRallyWeb"/>
Error:
------Start of DE processing------ = [9-2-17 15:17:13:214 CET]
Exception = javax.servlet.ServletException
Source = com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters
probeid = 1064
Stack Dump = javax.servlet.ServletException: java.lang.IllegalArgumentException: There is no value matching -1 id
at com.ibm.coderally.web.service.DatabaseServletUbi.doPost(DatabaseServletUbi.java:64)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1290)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:778)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1157)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:4956)
at com.ibm.ws.webcontainer31.osgi.webapp.WebApp31.handleRequest(WebApp31.java:525)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:315)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1014)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:280)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:967)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:359)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:318)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:471)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:405)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:285)
at com.ibm.ws.http.channel.internal.inbound.HttpICLReadCallback.complete(HttpICLReadCallback.java:66)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:929)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1018)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalArgumentException: There is no value matching -1 id
at com.ibm.coderally.api.ai.CheckpointAI.getById(CheckpointAI.java:109)
at com.ibm.coderally.web.service.SubmitVehicle.buildIntermediateRaceCar(SubmitVehicle.java:421)
at com.ibm.coderally.web.service.SubmitVehicle.doPost(SubmitVehicle.java:307)
at com.ibm.coderally.web.service.DatabaseServletUbi.doPost(DatabaseServletUbi.java:61)
... 25 more
Dump of callerThis
null
[Screenshots: "Make Vehicle"; "All Vehicles in the corner"]
Server.json:
{
  "servers": [
    {"alias": "IBM Cloud", "host": "http://www.coderallycloud.com", "username": "someone", "oauthType": null, "logoutURL": null, "port": 80, "userId": 77},
    {"alias": "NA Contest Server", "host": "http://challenge-na.coderallycloud.com", "username": "", "oauthType": null, "logoutURL": null, "port": 80, "userId": -1},
    {"alias": "EU Contest Server", "host": "http://challenge-eu.coderallycloud.com", "username": "", "oauthType": null, "logoutURL": null, "port": 80, "userId": -1},
    {"alias": "Brazil Contest Server", "host": "http://challenge-br.coderallycloud.com", "username": "", "oauthType": null, "logoutURL": null, "port": 80, "userId": -1},
    {"alias": "India Contest Server", "host": "http://challenge-in.coderallycloud.com", "username": "", "oauthType": null, "logoutURL": null, "port": 80, "userId": -1},
    {"alias": "China Contest Server", "host": "http://challenge-cn.coderallycloud.com", "username": "", "oauthType": null, "logoutURL": null, "port": 80, "userId": -1},
    {"alias": "MyOwnServer", "host": "http://localhost", "username": "Keepertje", "oauthType": null, "logoutURL": null, "port": 9080, "userId": 1}
  ]
}
OK, so according to the error message you have not yet logged into the server. To do so, click on the servers browser in Eclipse (the grey square icon next to the green + for creating new AIs). Once you have done that, select your local server and log in using the login button on the right; if the username does not exist, it will create it and log you in. Once logged in you can request a new race and it should work. (The user ID of -1 in the error message is what you see when you're not logged in; I'll look at getting that text changed for this circumstance to make it clearer.)

Amazon s3a returns 400 Bad Request with Spark-redshift library

I am facing a java.io.IOException: s3n://bucket-name : 400 : Bad Request error while loading Redshift data through the spark-redshift library.
The Redshift cluster and the S3 bucket are both in the Mumbai region.
Here is the full error stack:
2017-01-13 13:14:22 WARN TaskSetManager:66 - Lost task 0.0 in stage 0.0 (TID 0, master): java.io.IOException: s3n://bucket-name : 400 : Bad Request
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:453)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:181)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy10.retrieveMetadata(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:476)
at com.databricks.spark.redshift.RedshiftRecordReader.initialize(RedshiftInputFormat.scala:115)
at com.databricks.spark.redshift.RedshiftFileFormat$$anonfun$buildReader$1.apply(RedshiftFileFormat.scala:92)
at com.databricks.spark.redshift.RedshiftFileFormat$$anonfun$buildReader$1.apply(RedshiftFileFormat.scala:80)
at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:279)
at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(fileSourceInterfaces.scala:263)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:116)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.jets3t.service.impl.rest.HttpException: 400 Bad Request
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:425)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:279)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1052)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2264)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2193)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:174)
... 30 more
And here is my Java code for the same:
SparkContext sparkContext = SparkSession.builder().appName("CreditModeling").getOrCreate().sparkContext();
sparkContext.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem");
sparkContext.hadoopConfiguration().set("fs.s3a.awsAccessKeyId", fs_s3a_awsAccessKeyId);
sparkContext.hadoopConfiguration().set("fs.s3a.awsSecretAccessKey", fs_s3a_awsSecretAccessKey);
sparkContext.hadoopConfiguration().set("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com");
SQLContext sqlContext = new SQLContext(sparkContext);
Dataset dataset = sqlContext
    .read()
    .format("com.databricks.spark.redshift")
    .option("url", redshiftUrl)
    .option("query", query)
    .option("aws_iam_role", aws_iam_role)
    .option("tempdir", "s3a://bucket-name/temp-dir")
    .load();
I was able to solve the problem in Spark local mode by making the following changes (referring to this):
1) I replaced the jets3t jar with version 0.9.4.
2) I changed the jets3t configuration properties to support AWS signature version 4 buckets, as follows:
Jets3tProperties myProperties = Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME);
myProperties.setProperty("s3service.s3-endpoint", "s3.ap-south-1.amazonaws.com");
myProperties.setProperty("storage-service.request-signature-version", "AWS4-HMAC-SHA256");
myProperties.setProperty("uploads.stream-retry-buffer-size", "2147483646");
But now I am trying to run the job in cluster mode (Spark standalone, or with the Mesos resource manager) and the error appears again.
Any help would be appreciated!
Actual Problem:
Updating the Jets3tProperties at runtime to support AWS S3 signature version 4 worked in local mode but not in cluster mode, because the properties were only being updated on the driver JVM, not on any of the executor JVMs.
Solution:
I found a workaround to update the Jets3tProperties on all executors by referring to this link.
Based on that link, I added an extra code snippet that updates the Jets3tProperties inside a .foreachPartition() call, so the update runs on each executor that processes a partition.
Here is the code:
Dataset dataset = sqlContext
    .read()
    .format("com.databricks.spark.redshift")
    .option("url", redshiftUrl)
    .option("query", query)
    .option("aws_iam_role", aws_iam_role)
    .option("tempdir", "s3a://bucket-name/temp-dir")
    .load();

dataset.foreachPartition(partition -> {
    // Runs on the executor JVMs; setting the same properties repeatedly is
    // idempotent, so no first-partition guard is needed.
    Jets3tProperties myProperties =
        Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME);
    myProperties.setProperty("s3service.s3-endpoint", "s3.ap-south-1.amazonaws.com");
    myProperties.setProperty("storage-service.request-signature-version", "AWS4-HMAC-SHA256");
    myProperties.setProperty("uploads.stream-retry-buffer-size", "2147483646");
});
That stack implies that you're using the older s3n connector, based on jets3t, while you are setting properties which only work with s3a, the newer one. Use a URL like s3a:// to pick up the new client.
Given that you are trying to use the V4 API, you'll need to set fs.s3a.endpoint too. The 400 Bad Request response is the one you'd see if you tried to authenticate with V4 against the central endpoint.
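Illustrating that, a minimal sketch using the standard Hadoop S3A property names; note that fs.s3a.impl is left at its default (org.apache.hadoop.fs.s3a.S3AFileSystem) instead of being remapped to the old NativeS3FileSystem as in the question:

// Sketch: S3A credential and endpoint keys; with an s3a:// tempdir URL the
// region-specific endpoint makes the client sign requests for ap-south-1.
sparkContext.hadoopConfiguration().set("fs.s3a.access.key", fs_s3a_awsAccessKeyId);
sparkContext.hadoopConfiguration().set("fs.s3a.secret.key", fs_s3a_awsSecretAccessKey);
sparkContext.hadoopConfiguration().set("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com");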

FSReadError in Cassandra

I have been inserting data heavily into a 2-node Cassandra cluster. After 2 days I found that the server went down with this error, and I can't work out the problem:
FSReadError in /var/lib/cassandra/data/system/hints/system-hints-jb-1090-Data.db
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:95)
at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:41)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1163)
at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:362)
at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:332)
at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:145)
at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:45)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:294)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1468)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1294)
at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:346)
at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:304)
at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:525)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(Unknown Source)
at sun.nio.ch.FileChannelImpl.position(Unknown Source)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:101)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
... 29 more
Thanks for the answer.
My hunch: you have a bad disk, or your disk space ran out. You could confirm by running some disk check tools on your nodes.
