SolrException: Internal Server Error - search

I am working on Solr in my application, using apache-solr-solrj-1.4.0.jar.
When I call add(SolrInputDocument doc) on CommonsHttpSolrServer, I get the following exception:
org.apache.solr.common.SolrException: Internal Server Error
Internal Server Error
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:424)
at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:64)
Can anyone please help me to resolve this problem?
The following are the attributes in solrconfig.xml:
<lockType>native</lockType>
<unlockOnStartup>false</unlockOnStartup>
<reopenReaders>true</reopenReaders>
I am getting the following exception in the solr server logs:
24 May, 2010 2:51:22 AM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
at org.apache.solr.handler.ReplicationHandler$4.postCommit(ReplicationHandler.java:922)
at org.apache.solr.update.UpdateHandler.callPostCommitCallbacks(UpdateHandler.java:78)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:411)
at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:107)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:48)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:210)
at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:769)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:698)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:891)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:619)
INFO: {} 0 1039
24 May, 2010 2:52:29 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@./solr/data/index/lucene-be18de26b941317e71dc59f9e5ba63c4-write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:85)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1545)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1402)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:190)
at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:98)
at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:173)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:220)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:139)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:210)
at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:769)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:698)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:891)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:619)

I have set the following in my solrconfig.xml and it works:
<lockType>simple</lockType>
<unlockOnStartup>true</unlockOnStartup>
Also, set the following to avoid write-lock exceptions on the index directory:
<maxFieldLength>10000</maxFieldLength>
<writeLockTimeout>60000</writeLockTimeout>
<commitLockTimeout>60000</commitLockTimeout>
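
For completeness, here is a minimal SolrJ 1.4 sketch of the failing call path with an explicit commit; the URL and field names are assumptions for a default single-core setup:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class SolrAddExample {
    public static void main(String[] args) throws Exception {
        // Reuse a single server instance across threads; creating one per request
        // is a common source of concurrent-writer lock contention.
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");     // assumed schema fields
        doc.addField("title", "hello");
        server.add(doc);
        server.commit();                 // runs the postCommit callbacks seen in the server log
    }
}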

I am not certain, but in this thread
http://www.mail-archive.com/solr-user@lucene.apache.org/msg08048.html
they recommend using
<unlockOnStartup>true</unlockOnStartup>
and
<lockType>simple</lockType>
I think this should be safe as long as you access the index through Solr or SolrJ (not through Lucene directly!).
Any other ideas?

The SolrJ client does not give you the actual error. Look at the Solr server logs, which should be located under Tomcat or Jetty (or whatever container runs Solr).

Sounds like a corrupt index or a busy lock file. I had something similar and, oddly enough, restarting worked.

It comes from a failure to remove the write.lock file after some update actions. Removing write.lock from the core's data/index folder solves the problem temporarily and restores update capability. In my experience, updating with post.jar is more likely to trigger this problem, whereas updating via a URL with stream.body rarely does. Karussel's answer did improve the situation, but it did not seem to solve it entirely. I suspect it comes from a design issue in Solr; hopefully Solr 4 has solved it. Also see the answer to this question: how-to-solve-the-lock-obtain-timed-out-when-using-solr-plainly
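
As a stopgap, the lock file can be removed while Solr is shut down; a minimal sketch, assuming a default core layout (the exact file name varies with lockType, e.g. the lucene-...-write.lock seen in the trace above):

import java.io.File;

public class RemoveStaleLock {
    public static void main(String[] args) {
        // Only run this while Solr is stopped; deleting a live lock can corrupt the index.
        File lock = new File("solr/data/index/write.lock");
        if (lock.exists() && lock.delete()) {
            System.out.println("Removed stale lock: " + lock.getPath());
        }
    }
}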

Related

Large field values are not visible in Kibana dashboard

I am trying to display ErrorDetails field information on Kibana dashboards. My data seems to be indexed, since it is available under the Discover tab, but when I try to use the same field in a Visualization it doesn't return any results there.
Results on the Discover tab:
#timestamp
Mar 17, 2021 @ 17:45:46.857
ErrorDetails
Servlet.service() for servlet jsp threw exception
java.lang.IllegalStateException: getOutputStream() has already been called for this response
at org.apache.catalina.connector.Response.getWriter(Response.java:638)
at org.apache.catalina.connector.ResponseFacade.getWriter(ResponseFacade.java:214)
at javax.servlet.ServletResponseWrapper.getWriter(ServletResponseWrapper.java:105)
at org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:125)
at org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:118)
at org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:182)
at org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:126)
at org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:80)
at org.apache.jsp.common.emxNavigatorErrorPage_jsp._jspService(emxNavigatorErrorPage_jsp.java:237)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:390)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:334)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:749)
at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:605)
at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:544)
at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:461)
at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:412)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:201)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1041)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:603)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ErrorType
SEVERE
Time
Feb 22, 2021 9:04:35 PM
_id
uRkdQHgB5IxHfqi3CEya
_index
localhost_logs
I tried updating the ignore_above setting for the same index and field, but that did not work either.
I understand it is unusual to display a whole error message, but I don't have any pattern to filter it further and I want to display the complete information in one data table column. Any suggestions, please?
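
For reference, the mapping update I attempted looks roughly like this; a sketch using plain Java HTTP, assuming Elasticsearch on localhost:9200 (the limit value is an assumption). Note that values longer than ignore_above are skipped by the keyword index, which is why visualizations see nothing while Discover (which reads the stored _source) shows the field:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UpdateIgnoreAbove {
    public static void main(String[] args) throws Exception {
        // Raise ignore_above on the keyword sub-field of ErrorDetails.
        String body = "{\"properties\":{\"ErrorDetails\":{\"type\":\"text\","
                + "\"fields\":{\"keyword\":{\"type\":\"keyword\",\"ignore_above\":32766}}}}}";
        HttpURLConnection con = (HttpURLConnection)
                new URL("http://localhost:9200/localhost_logs/_mapping").openConnection();
        con.setRequestMethod("PUT");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + con.getResponseCode());
        // Existing documents must be re-indexed before the change takes effect.
    }
}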

Datastax 5.0.5 INFO [PO-thread-0] DbInfoRollupPlugin.java - Error retrieving node level db summary

I'm running DSE 5.0.5 on 2 identical clusters, all nodes being Spark + SOLR. On the first everything is OK; however, on the second I get this message in /var/lib/cassandra/system.log:
INFO [PO-thread-0] 2017-04-02 19:26:43,176 DbInfoRollupPlugin.java:196 - Error retrieving node level db summary
It is reported as "INFO", but something is wrong and I can't figure it out. A partial stack trace follows:
INFO [PO-thread-0] 2017-04-02 19:26:43,176 DbInfoRollupPlugin.java:196 - Error retrieving node level db summary
java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205) [na:1.8.0_112]
at com.datastax.bdp.plugin.DeferringScheduler$DeferringTask.get(DeferringScheduler.java:115) ~[dse-core-5.0.5.jar:5.0.5]
at com.datastax.bdp.reporting.snapshots.db.DbInfoRollupPlugin$DbInfoRollupTask.doRollup(DbInfoRollupPlugin.java:192) [dse-core-5.0.5.jar:5.0.5]
at com.datastax.bdp.reporting.snapshots.db.DbInfoRollupPlugin$DbInfoRollupTask.run(DbInfoRollupPlugin.java:173) [dse-core-5.0.5.jar:5.0.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Could you please indicate what to check to correct this issue?
Many thanks.
I figured out that it is this property:
dse.db_info_rollup_node_query_timeout, which has a default of 3000 ms.
However, I don't know where to set it.
Please advise.
Thanks,
Cristian
Please set performance_core_threads: 2 in dse.yaml. Note that this setting is probably missing from the default dse.yaml, so you will have to add it. Don't confuse it with
Although you receive that timeout exception, it should still work.
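
A sketch of the addition, assuming the stock DSE 5.0.x dse.yaml layout:

# dse.yaml -- add this line if it is absent
performance_core_threads: 2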

Unable to join race on local server

I installed the Code Rally server following the guide on IBM's site.
It runs, I can access the server information page and see the leaderboard.
But when I try to enter a race from Eclipse I get
"Unable to enter Keepertje on Localhost"
I also tried to connect with the Node client found on GitHub, but I cannot authenticate there either. Am I missing something?
Kind regards,
Cindy
My Server.xml:
<!-- Enable features -->
<featureManager>
<feature>webProfile-7.0</feature>
<feature>localConnector-1.0</feature>
<feature>websocket-1.1</feature>
</featureManager>
<!-- To access this server from a remote client add a host attribute to the following element, e.g. host="*" -->
<httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
<!-- Automatically expand WAR files and EAR files -->
<applicationManager autoExpand="true"/>
<applicationMonitor updateTrigger="mbean"/>
<webApplication id="CodeRallyWeb" location="CodeRallyWeb.war" name="CodeRallyWeb"/>
Error:
------Start of DE processing------ = [9-2-17 15:17:13:214 CET]
Exception = javax.servlet.ServletException
Source = com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters
probeid = 1064
Stack Dump = javax.servlet.ServletException: java.lang.IllegalArgumentException: There is no value matching -1 id
at com.ibm.coderally.web.service.DatabaseServletUbi.doPost(DatabaseServletUbi.java:64)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1290)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:778)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:475)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1157)
at com.ibm.ws.webcontainer.webapp.WebApp.handleRequest(WebApp.java:4956)
at com.ibm.ws.webcontainer31.osgi.webapp.WebApp31.handleRequest(WebApp31.java:525)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.handleRequest(DynamicVirtualHost.java:315)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:1014)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:280)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:967)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:359)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:318)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:471)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:405)
at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:285)
at com.ibm.ws.http.channel.internal.inbound.HttpICLReadCallback.complete(HttpICLReadCallback.java:66)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:504)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:574)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:929)
at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1018)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalArgumentException: There is no value matching -1 id
at com.ibm.coderally.api.ai.CheckpointAI.getById(CheckpointAI.java:109)
at com.ibm.coderally.web.service.SubmitVehicle.buildIntermediateRaceCar(SubmitVehicle.java:421)
at com.ibm.coderally.web.service.SubmitVehicle.doPost(SubmitVehicle.java:307)
at com.ibm.coderally.web.service.DatabaseServletUbi.doPost(DatabaseServletUbi.java:61)
... 25 more
Dump of callerThis
null
Server.json
{"servers":[{"alias":"IBM Cloud","host":"http://www.coderallycloud.com","username":"someone","oauthType":null,"logoutURL":null,"port":80,"userId":77},{"alias":"NA Contest Server","host":"http://challenge-na.coderallycloud.com","username":"","oauthType":null,"logoutURL":null,"port":80,"userId":-1},{"alias":"EU Contest Server","host":"http://challenge-eu.coderallycloud.com","username":"","oauthType":null,"logoutURL":null,"port":80,"userId":-1},{"alias":"Brazil Contest Server","host":"http://challenge-br.coderallycloud.com","username":"","oauthType":null,"logoutURL":null,"port":80,"userId":-1},{"alias":"India Contest Server","host":"http://challenge-in.coderallycloud.com","username":"","oauthType":null,"logoutURL":null,"port":80,"userId":-1},{"alias":"China Contest Server","host":"http://challenge-cn.coderallycloud.com","username":"","oauthType":null,"logoutURL":null,"port":80,"userId":-1},{"alias":"MyOwnServer","host":"http://localhost","username":"Keepertje","oauthType":null,"logoutURL":null,"port":9080,"userId":1}]}
OK, so according to the error message you have not yet logged into the server. To do so, click on the servers browser in Eclipse (the grey square icon next to the green + for creating new AIs). Then select your local server and log in using the login button on the right; if the username does not exist, it will be created and you will be logged in. Once logged in you can request a new race and it should work. (The user ID of -1 error message is displayed when you're not logged in; I'll look at getting that text changed for this circumstance to make it clearer.)

Spark Hbase connection issue

I am hitting the following error while trying to connect to HBase through Spark (using newAPIHadoopRDD) on HDP 2.4.2. I have already tried increasing the RPC timeout in hbase-site.xml, but I still get the same error. Any idea how to fix it?
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Wed Nov 16 14:59:36 IST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:195)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at scb.Hbasetest$.main(Hbasetest.scala:85)
at scb.Hbasetest.main(Hbasetest.scala)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1281)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1252)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1078)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:879)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:604)
16/11/16 14:59:36 INFO SparkContext: Invoking stop() from shutdown hook
I added the HBase conf path to the Hadoop classpath and the issue was resolved.
Thanks!
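
Equivalently, you can point the client configuration at hbase-site.xml explicitly; a minimal Java sketch (the HDP conf path is an assumption for a 2.4.2 layout):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;

public class HbaseConfExample {
    public static Configuration hbaseConf() {
        // Load hbase-site.xml so the client talks to the real cluster
        // instead of falling back to localhost defaults.
        Configuration conf = HBaseConfiguration.create();
        conf.addResource(new Path("/usr/hdp/current/hbase-client/conf/hbase-site.xml"));
        conf.set(TableInputFormat.INPUT_TABLE, "scores"); // table from the stack trace
        return conf;
    }
}
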
Though in a slightly different context, I faced a similar type of exception while connecting Hive with HBase.
It turned out my HBase table's column mapping was misconfigured.
After I configured the HBase table's columns properly (the table metadata), the issue vanished:
WITH SERDEPROPERTIES("hbase.columns.mapping" = "personal data:,:key")

MyFaces: exceeding maximum Paramaters allowed per request

The application I'm involved in was deployed to a test server (WebSphere 7), where we're getting errors we've never seen before.
This is the message about exceeding the maximum number of parameters allowed per request, followed by the error:
28.01.2013 15:51:38 SEVERE exceeding maximum Paramaters allowed per request -> 1000 ,current parameterSize-> 1000 cannot add more.
28.01.2013 15:51:38 SEVERE An exception occurred
javax.faces.FacesException: java.lang.IllegalArgumentException
org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.wrap(ExceptionHandlerImpl.java:241)
org.apache.myfaces.shared_impl.context.ExceptionHandlerImpl.handle(ExceptionHandlerImpl.java:156)
org.apache.myfaces.lifecycle.LifecycleImpl.executePhase(LifecycleImpl.java:191)
org.apache.myfaces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
javax.faces.webapp.FacesServlet.service(FacesServlet.java:189)
com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1657)
com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1597)
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:131)
org.primefaces.webapp.filter.FileUploadFilter.doFilter(FileUploadFilter.java:79)
com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:188)
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:116)
I've never seen a parameter for configuring a maximum parameter count, nor have I found one in http://myfaces.apache.org/core20/myfaces-impl/webconfig.html. So, what is this parameterSize parameter, and where can I configure it?
The application is embedded as EAR, is using MyFaces 2.0.7 and PrimeFaces 3.4.
Thanks BalusC for the quick response. Though the exception comes from a MyFaces class, it is the WebSphere setting
com.ibm.ws.webcontainer.maxParamPerRequest
You can use this property to change the maximum number of parameters allowed in your inbound requests, based on your applications and environment. The maximum number of parameters allowed per inbound request (GET or POST) defaults to 10000.
source: http://www-01.ibm.com/support/docview.wss?uid=swg21592923
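
For reference, the property is set as a web container custom property; on WebSphere 7 the admin console path is roughly Servers > Application servers > [server] > Web container > Custom properties (verify against the IBM document above for your version), then add:

com.ibm.ws.webcontainer.maxParamPerRequest = 20000

The value 20000 is just an illustrative example; setting the property to -1 reportedly disables the limit entirely, but check the IBM document before relying on that.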
