Is there any way I can connect to a Netezza database using Presto? When I try to connect using the PostgreSQL connector, I get the following error:
2018-08-08T13:42:06.124-0400 ERROR Query-20180808_174205_00000_qwr8d-165 org.postgresql.Driver Connection error:
org.postgresql.util.PSQLException: A connection could not be made using the requested protocol null.
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:57)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194)
at org.postgresql.Driver.makeConnection(Driver.java:450)
at org.postgresql.Driver.connect(Driver.java:252)
at com.facebook.presto.plugin.jdbc.DriverConnectionFactory.openConnection(DriverConnectionFactory.java:59)
at com.facebook.presto.plugin.jdbc.BaseJdbcClient.getSchemaNames(BaseJdbcClient.java:119)
at com.facebook.presto.plugin.jdbc.JdbcMetadata.listSchemaNames(JdbcMetadata.java:72)
at com.facebook.presto.metadata.MetadataManager.listSchemaNames(MetadataManager.java:290)
at com.facebook.presto.connector.informationSchema.InformationSchemaMetadata.calculatePrefixesWithSchemaName(InformationSchemaMetadata.java:257)
at com.facebook.presto.connector.informationSchema.InformationSchemaMetadata.getTableLayouts(InformationSchemaMetadata.java:222)
at com.facebook.presto.metadata.MetadataManager.getLayouts(MetadataManager.java:350)
at com.facebook.presto.sql.planner.iterative.rule.PickTableLayout.planTableScan(PickTableLayout.java:203)
at com.facebook.presto.sql.planner.iterative.rule.PickTableLayout.access$200(PickTableLayout.java:61)
at com.facebook.presto.sql.planner.iterative.rule.PickTableLayout$PickTableLayoutWithoutPredicate.apply(PickTableLayout.java:186)
at com.facebook.presto.sql.planner.iterative.rule.PickTableLayout$PickTableLayoutWithoutPredicate.apply(PickTableLayout.java:153)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.transform(IterativeOptimizer.java:165)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreNode(IterativeOptimizer.java:138)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:103)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:185)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:105)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreChildren(IterativeOptimizer.java:185)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.exploreGroup(IterativeOptimizer.java:105)
at com.facebook.presto.sql.planner.iterative.IterativeOptimizer.optimize(IterativeOptimizer.java:94)
at com.facebook.presto.sql.planner.LogicalPlanner.plan(LogicalPlanner.java:140)
at com.facebook.presto.sql.planner.LogicalPlanner.plan(LogicalPlanner.java:129)
at com.facebook.presto.execution.SqlQueryExecution.doAnalyzeQuery(SqlQueryExecution.java:341)
at com.facebook.presto.execution.SqlQueryExecution.analyzeQuery(SqlQueryExecution.java:326)
at com.facebook.presto.execution.SqlQueryExecution.start(SqlQueryExecution.java:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
If I try to connect using the Netezza driver instead, with the following configuration, I get this error:
etc/config.properties
plugin.bundles=com.facebook.presto:presto-netezza:0.117
2018-08-08T13:15:37.716-0400 ERROR main com.facebook.presto.server.PrestoServer Could not resolve artifact: io.airlift.resolver.DefaultArtifact#558127d2
java.lang.RuntimeException: Could not resolve artifact: io.airlift.resolver.DefaultArtifact#558127d2
at com.facebook.presto.server.PluginManager.createClassLoader(PluginManager.java:284)
at com.facebook.presto.server.PluginManager.buildClassLoaderFromCoordinates(PluginManager.java:274)
at com.facebook.presto.server.PluginManager.buildClassLoader(PluginManager.java:239)
at com.facebook.presto.server.PluginManager.loadPlugin(PluginManager.java:154)
at com.facebook.presto.server.PluginManager.loadPlugins(PluginManager.java:142)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:117)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:67)
There is no (official) Presto plugin for Netezza yet (that I know of).
You can write one, or pay someone else to write one.
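For reference, the first stack trace above is what happens when Presto's PostgreSQL connector is pointed at a Netezza host: the catalog loads, but the PostgreSQL driver cannot complete its protocol negotiation, presumably because Netezza does not speak the wire protocol pgjdbc expects. A minimal sketch of such a catalog file (for example etc/catalog/netezza.properties; the host, port, and credentials are placeholders) would be:
connector.name=postgresql
connection-url=jdbc:postgresql://netezza-host:5480/mydb
connection-user=admin
connection-password=secret
The second error is consistent with this answer: plugin.bundles=com.facebook.presto:presto-netezza:0.117 fails because there is no such Maven artifact for Presto to resolve.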
When trying to create a table with the gcloud CLI (using the Spanner emulator):
gcloud spanner databases ddl update db_name --instance=instance_name --ddl=$data
I'm attempting to add a check constraint, where $data evaluates to:
CREATE TABLE my_table (
my_key STRING(36),
some_column STRING(100),
CONSTRAINT ck_my_table_some_column_enum CHECK (some_column = 'this' OR some_column = 'that'),
) PRIMARY KEY (my_key)
and keep getting the following error:
ERROR: (gcloud.spanner.databases.ddl.update) HttpError accessing <http://localhost:9020/v1/projects/spanner-emulator-test/instances/instance_name/databases/db_name/ddl?alt=json>: response: <{'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Tue, 31 Aug 2021 22:53:52 GMT', 'transfer-encoding': 'chunked', 'status': 501}>, content <{"error":"Check Constraint is not implemented.","code":12,"message":"Check Constraint is not implemented."}>
This may be due to network connectivity issues. Please check your network settings, and the status of the service you are trying to reach.
First inclination might be to verify the emulator is running given the last part of the message. It is. And I've tried immediately executing the same DDL without the check constraint and it works perfectly - table is created.
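For reference, the variant that does succeed against the emulator is the same statement with the CONSTRAINT line removed:
CREATE TABLE my_table (
  my_key STRING(36),
  some_column STRING(100)
) PRIMARY KEY (my_key)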
DBeaver
I've also tried creating this table using the DBeaver db tool and get the following error:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [12]: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:513)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:444)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:431)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:816)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3440)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4718)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcSqlExceptionImpl: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory.of(JdbcSqlExceptionFactory.java:222)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.execute(AbstractJdbcStatement.java:259)
at com.google.cloud.spanner.jdbc.JdbcStatement.executeStatement(JdbcStatement.java:110)
at com.google.cloud.spanner.jdbc.JdbcStatement.execute(JdbcStatement.java:106)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
... 12 more
Caused by: com.google.cloud.spanner.SpannerException: UNIMPLEMENTED: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:284)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:61)
at com.google.cloud.spanner.SpannerExceptionFactory.fromApiException(SpannerExceptionFactory.java:299)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:174)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:110)
at com.google.cloud.spanner.DatabaseAdminClientImpl.lambda$updateDatabaseDdl$7(DatabaseAdminClientImpl.java:337)
at com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction.apply(ApiFutures.java:240)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:223)
at com.google.common.util.concurrent.AbstractCatchingFuture$CatchingFuture.doFallback(AbstractCatchingFuture.java:211)
at com.google.common.util.concurrent.AbstractCatchingFuture.run(AbstractCatchingFuture.java:124)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:724)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.addListener(FluentFuture.java:110)
at com.google.common.util.concurrent.ForwardingListenableFuture.addListener(ForwardingListenableFuture.java:45)
at com.google.api.core.ApiFutureToListenableFuture.addListener(ApiFutureToListenableFuture.java:52)
at com.google.common.util.concurrent.AbstractCatchingFuture.create(AbstractCatchingFuture.java:41)
at com.google.common.util.concurrent.Futures.catching(Futures.java:297)
at com.google.api.core.ApiFutures.catching(ApiFutures.java:99)
at com.google.api.gax.longrunning.OperationFutureImpl.<init>(OperationFutureImpl.java:97)
at com.google.cloud.spanner.DatabaseAdminClientImpl.updateDatabaseDdl(DatabaseAdminClientImpl.java:335)
at com.google.cloud.spanner.connection.DdlClient.executeDdl(DdlClient.java:89)
at com.google.cloud.spanner.connection.DdlClient.executeDdl(DdlClient.java:84)
at com.google.cloud.spanner.connection.SingleUseTransaction.lambda$executeDdlAsync$1(SingleUseTransaction.java:272)
at io.grpc.Context$2.call(Context.java:596)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException: Execution failed for statement: CREATE TABLE my_table (
my_key STRING(36),
some_column STRING(100),
CONSTRAINT ck_my_table_some_column_enum CHECK (some_column = 'this' OR some_column = 'that'),
) PRIMARY KEY (my_key)
at com.google.cloud.spanner.connection.AbstractBaseUnitOfWork.executeStatementAsync(AbstractBaseUnitOfWork.java:233)
at com.google.cloud.spanner.connection.AbstractBaseUnitOfWork.executeStatementAsync(AbstractBaseUnitOfWork.java:145)
at com.google.cloud.spanner.connection.SingleUseTransaction.executeDdlAsync(SingleUseTransaction.java:281)
at com.google.cloud.spanner.connection.ConnectionImpl.executeDdlAsync(ConnectionImpl.java:1157)
at com.google.cloud.spanner.connection.ConnectionImpl.execute(ConnectionImpl.java:803)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.execute(AbstractJdbcStatement.java:250)
at com.google.cloud.spanner.jdbc.JdbcStatement.executeStatement(JdbcStatement.java:110)
at com.google.cloud.spanner.jdbc.JdbcStatement.execute(JdbcStatement.java:106)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:513)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:444)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:431)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:816)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3440)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4718)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: io.grpc.StatusRuntimeException: UNIMPLEMENTED: Check Constraint is not implemented.
at io.grpc.Status.asRuntimeException(Status.java:535)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
... 3 more
Check constraints are available in the latest version of the emulator. Can you please check and get back to us if there are issues?
https://console.cloud.google.com/gcr/images/cloud-spanner-emulator/global/emulator#sha256:ef0fd2ec74bb17b6c31e5010fb699fd0009b9829721d5159e6a11b6a40f881f1/details
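If you run the emulator as a Docker container, a minimal sketch of picking up that newer image would be (default emulator ports assumed: 9010 for gRPC, 9020 for REST):
docker pull gcr.io/cloud-spanner-emulator/emulator
docker run -p 9010:9010 -p 9020:9020 gcr.io/cloud-spanner-emulator/emulator
If you start it through the gcloud CLI instead, running gcloud components update first should pull in the current emulator build used by gcloud emulators spanner start.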
Well... check constraints still don't seem to work with the emulator for me. I tried this with a test DB in the console and it worked. If anyone comes up with a solution, or if there's something I need to configure, please let me know! Thank you.
I need help figuring out why my Spark job, which uses the spark-bigquery-connector, failed during the write step to temporaryGcsBucket.
Here is the Spark code that tries to write the data to the BQ table:
outDF.write
.format("bigquery")
.option("temporaryGcsBucket", "bq_temporary_folder")
.option("parentProject", "user_project")
.option("table", "user.destination_table")
.mode(SaveMode.Overwrite)
.save()
Here is the error log:
Caused by: java.lang.IllegalArgumentException: Wrong bucket: bq_temporary_folder, in path: gs://bq_temporary_folder/.spark-bigquery-application_1605725797163_1966-6f6cfc35-543b-4138-ae0e-649ce7c2ae56, expected bucket: null
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.checkPath(GoogleHadoopFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:581)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.makeQualified(GoogleHadoopFileSystemBase.java:454)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:144)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.<init>(FileOutputCommitter.java:103)
at org.apache.parquet.hadoop.ParquetOutputCommitter.<init>(ParquetOutputCommitter.java:43)
at org.apache.parquet.hadoop.ParquetOutputFormat.getOutputCommitter(ParquetOutputFormat.java:442)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupCommitter(HadoopMapReduceCommitProtocol.scala:100)
at org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol.setupCommitter(SQLHadoopMapReduceCommitProtocol.scala:40)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupTask(HadoopMapReduceCommitProtocol.scala:217)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:226)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:175)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:405)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I'm using the SparkBQ connector spark-bigquery-with-dependencies_2.12-0.18.0.jar and the GCS connector gcs-connector-hadoop2-2.0.0-shaded.jar.
I do not think it's a permission issue, since the Spark job was able to write to the same GCS bucket by calling rdd.saveAsTextFile().
When I checked the GoogleHadoopFileSystem code, I saw that the "expected bucket: null" part of the message occurs because the rootBucket field in the GoogleHadoopFileSystemBase class is not set.
But I don't know how I can get this field initialized in the first place.
Someone told me that there might be a mismatch between the GCS connector version used by my Spark job and the one on the Spark cluster. Even after confirming that the Spark job was using the GCS connector library configured for the Spark cluster, the job still failed.
I'm running out of ideas about what to try at this point.
Thank you in advance for any advice/help.
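Not a confirmed fix, but a small diagnostic sketch for the version-mismatch theory: print which jar actually provides the GCS FileSystem class at runtime, and pin the standard gcs-connector implementation keys explicitly on the job's Hadoop configuration (whether this gets rootBucket initialized in this particular setup is an assumption):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("gcs-connector-check").getOrCreate()

// Which jar is GoogleHadoopFileSystem actually loaded from at runtime?
val fsClass = Class.forName("com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
println("GoogleHadoopFileSystem loaded from: " +
  fsClass.getProtectionDomain.getCodeSource.getLocation)

// Standard gcs-connector keys, set explicitly rather than relying on defaults.
val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
hadoopConf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")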
I'm running DSE 5.0.5 on two identical clusters, all nodes being Spark+SOLR. On the first everything is OK; however, on the second I get this message in /var/lib/cassandra/system.log:
INFO [PO-thread-0] 2017-04-02 19:26:43,176 DbInfoRollupPlugin.java:196 - Error retrieving node level db summary
It is reported as "INFO", but something is wrong and I can't figure it out. A partial stack trace follows:
INFO [PO-thread-0] 2017-04-02 19:26:43,176 DbInfoRollupPlugin.java:196 - Error retrieving node level db summary
java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(FutureTask.java:205) [na:1.8.0_112]
at com.datastax.bdp.plugin.DeferringScheduler$DeferringTask.get(DeferringScheduler.java:115) ~[dse-core-5.0.5.jar:5.0.5]
at com.datastax.bdp.reporting.snapshots.db.DbInfoRollupPlugin$DbInfoRollupTask.doRollup(DbInfoRollupPlugin.java:192) [dse-core-5.0.5.jar:5.0.5]
at com.datastax.bdp.reporting.snapshots.db.DbInfoRollupPlugin$DbInfoRollupTask.run(DbInfoRollupPlugin.java:173) [dse-core-5.0.5.jar:5.0.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Could you please indicate what to check to correct this issue?
Many thanks.
I figured out that it is this property:
dse.db_info_rollup_node_query_timeout, which has a default of 3000 ms.
However, I don't know where to set it...
Please advise.
Thanks,
Cristian
Please set performance_core_threads: 2 in dse.yaml. Note that this setting is probably missing from the default dse.yaml, so you will have to add it. Don't confuse it with
Even though you still receive that timeout exception, it should work.
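For reference, it is a single top-level line to add to dse.yaml (exact placement in the file should not matter), and the node needs a restart to pick it up:
performance_core_threads: 2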
It looks like IgniteRDD.sql only supports ANSI SQL, not Spark SQL or HiveQL?
When I call IgniteRDD.sql(sqlText) and it throws an exception for bad SQL, the stack trace leads to org.h2.jdbc.JdbcSQLException, which suggests that the H2 parser is parsing the SQL?
Is my understanding correct?
Exception in thread "main" javax.cache.CacheException: Failed to parse query: select __VAL from Integer
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1137)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:732)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:730)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1666)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:730)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:700)
at org.apache.ignite.spark.IgniteRDD.sql(IgniteRDD.scala:147)
at com.xyz.ignite.spark.IgniteSparkTest$.main(IgniteSparkTest.scala:33)
at com.xyz.ignite.spark.IgniteSparkTest.main(IgniteSparkTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: org.h2.jdbc.JdbcSQLException: Column "__VAL" not found; SQL statement:
select __VAL from Integer [42122-191]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.expression.ExpressionColumn.optimize(ExpressionColumn.java:147)
at org.h2.command.dml.Select.prepare(Select.java:852)
at org.h2.command.Parser.prepareCommand(Parser.java:257)
IgniteRDD.sql and objectSql do not work on top of Spark SQL; they just call Ignite's SQL engine.
Here is the documentation with possible instructions.
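As a minimal sketch of what the failing query would look like in Ignite's own SQL dialect (assuming an existing IgniteRDD named cacheRdd over an Integer/Integer cache): the built-in key and value columns that Ignite's H2-based engine exposes are _key and _val, so the select __VAL from the trace would be written as:
// cacheRdd: an IgniteRDD[Integer, Integer] obtained from IgniteContext.fromCache(...)
// _val is the built-in value column; IgniteRDD.sql returns a Spark DataFrame
val df = cacheRdd.sql("select _val from Integer where _val > 10")
df.show()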
I am hitting the following error while trying to connect to HBase through Spark (using newAPIHadoopRDD) on HDP 2.4.2. I have already tried increasing the RPC timeout in hbase-site.xml, but I still get the same error. Any idea how to fix this?
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Wed Nov 16 14:59:36 IST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:195)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:821)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
at scb.Hbasetest$.main(Hbasetest.scala:85)
at scb.Hbasetest.main(Hbasetest.scala)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=71216: row 'scores,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hklvadcnc06.hk.standardchartered.com,16020,1478491683763, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1281)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1252)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
... 4 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hklvadcnc06.hk.standardchartered.com/10.20.235.13:16020 is closing. Call id=9, waitTime=171
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1078)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:879)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:604)
16/11/16 14:59:36 INFO SparkContext: Invoking stop() from shutdown hook
I added the HBase conf path to the Hadoop classpath and the issue was resolved.
Thanks!
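For anyone hitting the same thing, a sketch of what that can look like on an HDP node (the conf path below is the usual HDP location; adjust to your layout). The point is simply that hbase-site.xml must be visible to the Spark job:
# add the HBase client configuration directory to the Hadoop classpath
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/etc/hbase/conf
Alternatively, the job itself can load the file explicitly with HBaseConfiguration.create() followed by conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml")).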
Though in a slightly different context, I faced a similar type of exception while connecting Hive with HBase.
Guess what! My HBase table's column mapping was misconfigured.
After I configured the HBase table's columns properly (the table metadata), the issue vanished.
WITH SERDEPROPERTIES("hbase.columns.mapping" = "personal data:,:key")
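For completeness, a sketch of a full mapping along those lines (the table and column names here are made up): the mapping entries line up with the Hive columns in order, exactly one entry must be :key, and a bare family: entry maps to a Hive MAP column.
-- hypothetical Hive-on-HBase table; adjust names to your schema
CREATE EXTERNAL TABLE hbase_scores (
  rowkey STRING,
  personal_data MAP<STRING, STRING>
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,personal data:")
TBLPROPERTIES ("hbase.table.name" = "scores");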