Error Code PINOT_UNABLE_TO_FIND_BROKER: No valid brokers found - Presto

I am trying to query Pinot table data using Presto. Below are my configuration details.
Pinot is started on one of the SIT servers, i.e. 10.184.160.52:
Controller: 10.184.160.52:9000
Server: 10.184.160.52:7000
Broker: 10.184.160.52:8000
Presto is on a different server, 10.184.160.53, and the ports are open between these two servers.
I created a pinot.properties file at presto/etc/catalog/pinot.properties:
connector.name=pinot
pinot.controller-urls=Controller_Host:9000
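For reference, with the controller address above filled in, the catalog file would look something like this (a minimal sketch; further properties may be needed depending on your setup):
connector.name=pinot
pinot.controller-urls=10.184.160.52:9000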
bin/launcher run ---> loaded the Pinot catalog.
Started the Presto CLI against the Pinot catalog:
./presto --server 10.184.160.53:8080 --catalog pinot
show catalogs; (able to see my catalog)
pinot
show schemas; (able to see the schema also)
presto> show schemas;
Schema
--------------------
default
presto> use default;
USE
presto:default> show tables; (able to see the Pinot tables:)
Table
------------------------------
test
test2
test3
(3 rows)
Query 20210519_124218_00061_vcz4u, FINISHED, 1 node
Splits: 19 total, 19 done (100.00%)
0:00 [3 rows, 98B] [10 rows/s, 340B/s]
But when I run select * from test; it reports that no valid brokers were found:
presto:default> select * from test;
Query 20210519_124230_00062_vcz4u failed: No valid brokers found for test
Complete Presto Logs:
Error Code PINOT_UNABLE_TO_FIND_BROKER (84213767)
Stack Trace
io.prestosql.pinot.PinotException: No valid brokers found for test
at io.prestosql.pinot.client.PinotClient.getBrokerHost(PinotClient.java:285)
at io.prestosql.pinot.client.PinotClient.sendHttpGetToBrokerJson(PinotClient.java:185)
at io.prestosql.pinot.client.PinotClient.getRoutingTableForTable(PinotClient.java:302)
at io.prestosql.pinot.PinotSplitManager.generateSplitsForSegmentBasedScan(PinotSplitManager.java:72)
at io.prestosql.pinot.PinotSplitManager.getSplits(PinotSplitManager.java:167)
at io.prestosql.split.SplitManager.getSplits(SplitManager.java:87)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitScanAndFilter(DistributedExecutionPlanner.java:203)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:185)
at io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:156)
at io.prestosql.sql.planner.plan.TableScanNode.accept(TableScanNode.java:143)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:124)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:131)
at io.prestosql.sql.planner.DistributedExecutionPlanner.plan(DistributedExecutionPlanner.java:101)
at io.prestosql.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:470)
at io.prestosql.execution.SqlQueryExecution.start(SqlQueryExecution.java:386)
at io.prestosql.execution.SqlQueryManager.createQuery(SqlQueryManager.java:237)
at io.prestosql.dispatcher.LocalDispatchQuery.lambda$startExecution$7(LocalDispatchQuery.java:143)
at io.prestosql.$gen.Presto_350____20210519_105836_2.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
I am not able to understand what is happening here or why this issue only shows up for the select statement. It looks like the Pinot broker is not accepting queries. Could someone kindly suggest what the issue is?

Update: This is because the connector does not support mixed case table names. Mixed case column names are supported. There is a pull request to add support for mixed case table names: https://github.com/trinodb/trino/pull/7630
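As a hypothetical illustration (assuming the Pinot table was actually defined with a mixed-case name such as Test), Presto lowercases unquoted identifiers, so the broker routing lookup is done with a name Pinot does not recognize:
presto:default> select * from test;   -- Presto lowercases the table name to "test"
Query ... failed: No valid brokers found for test
Until that pull request is merged, one likely workaround is to define the Pinot table with an all-lowercase name so both sides agree.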


Key with version number error in YugabyteDB cluster?

[Question posted by a user on YugabyteDB Community Slack]
I'm just running YugabyteDB through some negative tests and I'm facing an issue. I'm running a 3-node cluster (master and tserver running on the same node). When I stop one node and start it up again, the T-Server does not boot up, failing with this log:
F20220506 06:50:49 ../../src/yb/tserver/tablet_server_main.cc:220] Invalid argument (yb/util/universe_key_manager.cc:73): Could not init Tablet Manager: Failed to open tablet metadata for tablet: eb1e5457022f42c084148ca8fa4ba5c6: Failed to load tablet metadata for tablet id eb1e5457022f42c084148ca8fa4ba5c6: Could not load Raft group metadata from /data/yugabyte/data/yb-data/tserver/tablet-meta/eb1e5457022f42c084148ca8fa4ba5c6: Key with version number c7b91fad-dd60-404f-8846-cab568e52468 does not exist.
# 0x7fcdace5ee4c yb::LogFatalHandlerSink::send()
# 0x7fcdaa5e28ee google::LogMessage::SendToLog()
# 0x7fcdaa5dfa7a google::LogMessage::Flush()
# 0x7fcdaa5e3169 google::LogMessageFatal::~LogMessageFatal()
# 0x4124ea yb::tserver::(anonymous namespace)::TabletServerMain()
# 0x7fcda6811825 __libc_start_main
# 0x410f99 _start
# (nil) (unknown)
The only way to start it up is to remove the old data.
My steps were:
1. Cluster up with 3 servers
2. Create a table with 3 partitions on different tablet ids (confirmed via UI)
3. Insert 3 different rows into different partitions
4. Select * works fine
5. Shut down one tablet server
6. Select * still works fine
7. Start up the tablet server again (error)
We are running using config file:
/usr/local/yugabyte/src/yugabyte-2.11.0.1/bin/./yb-tserver --flagfile /data/yugabyte/etc/tserver.conf
and config:
--tserver_master_addrs=ip1:7100,ip2:7100,ip3:7100
--rpc_bind_addresses=fqdn
--server_broadcast_addresses=ip1
--enable_ysql
--pgsql_proxy_bind_address=ip1:5433
--cql_proxy_bind_address=ip1:9042
--fs_data_dirs=/data/yugabyte/data
--placement_cloud=cloud
--placement_region=reg
--placement_zone=zone
--use_client_to_server_encryption=true
--certs_for_client_dir=/data/yugabyte/ssl
--certs_dir=/data/yugabyte/ssl
--use_node_to_node_encryption=true
--ysql_enable_auth=true
--log_dir=/data/yugabyte/logs
--ssl_protocols=tls12,tls13
--ysql_pg_conf=pgaudit.log='DDL',pgaudit.log_level=notice,pgaudit.log_client=ON,log_min_messages=notice,log_line_prefix='\%m \%r \%u \%d [\%p]'
Looks like the key is not in memory:
/usr/local/yugabyte/src/yugabyte-2.11.0.1/bin/yb-admin -master_addresses $master get_universe_config
{"version":2,"replicationInfo":{"liveReplicas":{"numReplicas":3,"placementBlocks":[{"cloudInfo":{"placementCloud":"cloud","placementRegion":"region","placementZone":"zone"},"minNumReplicas":1}]}},"clusterUuid":"dccea8cb-9790-48ba-8a05-6218a8e875a4","encryptionInfo":{"encryptionEnabled":true,"universeKeyRegistryEncoded":"sZTzNciYu6b1KxZonpJx6v7CDDvexiv1jh/HIEAOkpV4YRrIZbIK9jtajdEMmVEUy706+dmz8bmnZvy6/n33u+qS7fzRSOTPOlpxYI6+k1lSM6bu2DRTTffhZtaiKN15gy8a3ifaZV7xJ9QJ3z9SvFYzb96+KDWw","keyPath":"/data/yugabyte/rest/universe_key","latestVersionId":"c7b91fad-dd60-404f-8846-cab568e52468","keyInMemory":false}}
KeyInMemory: False
We are using encryption at rest, but the file with the key should be there.
Am I doing something wrong?
/usr/local/yugabyte/src/yugabyte-2.11.0.1/bin/yb-admin -master_addresses fqdn:7100 all_masters_have_universe_key_in_memory 7e13c99e-5278-4abd-ab78-79f70d6c2679
Error running all_masters_have_universe_key_in_memory: Operation failed. Try again. (yb/tools/yb-admin_client_ent.cc:1027): Unable to check whether master has universe key in memory.: Node fqdn:7100 does not have universe key in memory
The above error seems to indicate the tserver is looking for a key - “c7b91fad-dd60-404f-8846-cab568e52468” - in order to open the file, but can’t find it.
I now realize there is an actual key file on the masters. That approach has been deprecated in favor of more secure in-memory keys. I did a little digging through the code, and sure enough, we don’t actually support sending on-disk keys to tablet servers on restart.
The command you have to run is this one:
yb-admin -master_addresses <master-addresses> add_universe_keys_to_all_masters <key_id> <key_path>
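For example, with the key ID and key path taken from the tserver log and the get_universe_config output above, the call would look roughly like this:
yb-admin -master_addresses $master add_universe_keys_to_all_masters c7b91fad-dd60-404f-8846-cab568e52468 /data/yugabyte/rest/universe_key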
and then right after that it should work:
/usr/local/yugabyte/src/yugabyte-2.11.0.1/bin/yb-admin -master_addresses $master all_masters_have_universe_key_in_memory 1
Node fqdn1:7100 has universe key in memory: 1
Node fqdn2:7100 has universe key in memory: 1
Node fqdn3:7100 has universe key in memory: 1

DSE Loading Data using Bulk Loader

Currently, I have successfully installed the necessary nodes and datacenters using OpsCenter.
I have also created the necessary keyspace and table in Cassandra through DataStax Studio.
KeySpace Generated
CREATE KEYSPACE graph_tables WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1};
Table Generated
CREATE TABLE people_node (id text, name text, age int, location 'PointType', gender text, dob timestamp, PRIMARY KEY(id));
Sample Data
id, name , age, location, gender, dob
0, Betsy, 15 , 10 15 , F , 1997-09-21T12:55:54
Assume we have node_1 with the IP address 1.1.1.1 and a second node, node_2, with the IP address 2.2.2.2. These are the two nodes on which OpsCenter has installed Cassandra.
From here I attempted to load the data using dsbulk:
dsbulk load -url ./people_node_csv -k graph_tables -t people_node -h '1.1.1.1, 2.2.2.2 ' -header true
However, this results in an error stating "Operation Load_..... failed: Authentication error on host /1.1.1.1:9042: Host /1.1.1.1:9042 requires authentication, but no authenticator found in Cluster Configurations". I attempted to resolve this by adding "driver.ssl.keystone.password = cassandra" as shown in the documentation, but the error still persists. Any advice on solving this issue will be greatly appreciated.
You need to provide the following settings, as described in the documentation:
-u - to specify user name
-p - to specify password
--driver.auth.provider DsePlainTextAuthProvider - to select the corresponding authentication provider.
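Putting these together with the original command, a sketch of the full invocation would be (assuming the default cassandra/cassandra credentials; substitute your own):
dsbulk load -url ./people_node_csv -k graph_tables -t people_node -h '1.1.1.1, 2.2.2.2' -header true -u cassandra -p cassandra --driver.auth.provider DsePlainTextAuthProvider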

WSO2 Stream Processor Connecting to Cassandra gives error

I am trying to connect my Siddhi application to a Cassandra data store by following the instructions in the example program provided in the editor.
I downloaded the DataStax Java driver jar (OSGi), placed it in the WSO2/lib folder, and started the application. Now I get an error:
> [2019-03-10_16-41-17_549] ERROR {org.wso2.siddhi.core.table.Table} - Error on 'Store-cassandra'. . Error while connecting to Table 'SweetProductionTable'. (Encoded)
java.lang.NullPointerException
at org.wso2.extension.siddhi.store.cassandra.CassandraEventTable.connect(CassandraEventTable.java:443)
at org.wso2.siddhi.core.table.Table.connectWithRetry(Table.java:388)
at org.wso2.siddhi.core.SiddhiAppRuntime.startWithoutSources(SiddhiAppRuntime.java:401)
at org.wso2.siddhi.core.SiddhiAppRuntime.start(SiddhiAppRuntime.java:376)
at org.wso2.carbon.siddhi.editor.core.internal.DebugRuntime.start(DebugRuntime.java:68)
at org.wso2.carbon.siddhi.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:37)
at org.wso2.carbon.siddhi.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:588)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$57(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-03-10_16-41-17_551] ERROR {org.wso2.siddhi.core.SiddhiAppRuntime} - Error starting Siddhi App 'Store-cassandra', triggering shutdown process. null (Encoded)
Below is the corresponding code
define stream SweetProductionStream (id int, name string);
#Store(type='cassandra' , cassandra.host='localhost' ,keyspace='production')
#index('uid')
#primaryKey('uid')
define table SweetProductionTable (uid int, name string);
/* Inserting event into the cassandra keyspace */
#info(name='query1')
from SweetProductionStream
select SweetProductionStream.id as uid, SweetProductionStream.name
insert into SweetProductionTable;
and below are the instructions given in the example:
Prerequisites:
1) Ensure that Cassandra version 3 or above is installed on your machine.
2) Add the DataStax Java driver into {WSO2_SP_HOME}/lib as follows:
a) Download the DataStax Java driver from: http://central.maven.org/maven2/com/datastax/cassandra/cassandra-driver-core/3.3.2/cassandra-driver-core-3.3.2.jar
b) Use the "jartobundle" tool in {WSO2_SP_Home}/bin to extract and convert the above JARs into OSGi bundles.
For Windows: <SP_HOME>/bin/jartobundle.bat <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
For Linux: <SP_HOME>/bin/jartobundle.sh <PATH_OF_DOWNLOADED_JAR> <PATH_OF_CONVERTED_JAR>
Note: The driver given in the above link is an OSGi-bundled one. Please skip this step if the jar is already OSGi bundled.
c) Copy the converted bundles to the {WSO2_SP_Home}/lib directory.
3) Create a keyspace named 'production' in the Cassandra store.
4) In the store configuration of this application, replace 'username' and 'password' values with your Cassandra credentials.
5) Save this sample.
Executing the Sample:
1) Start the Siddhi application by clicking on 'Run'.
2) If the Siddhi application starts successfully, the following message is shown on the console
* Store-cassandra.siddhi - Started Successfully!
Note:
If you want to edit this application while it's running, stop the application, make your edits and save the application, and then start it again.
Testing the Sample:
1) Simulate single events:
a) Click on 'Event Simulator' (double arrows on left tab) and click 'Single Simulation'
b) Select 'Store-cassandra' as 'Siddhi App Name' and select 'searchSweetProductionStream' as 'Stream Name'.
c) Provide attribute values, and then click Send.
2) Send at least one event where the name matches a name value in the data you previously inserted into the SweetProductionTable. This will satisfy the 'on' condition of the join query.
3) Optionally, send events to the other corresponding streams to add, delete, update, insert, and search events.
Notes:
- After a change in the store, you can use the search stream to see whether the operation is successful.
- The Primary Key constraint in SweetProductionTable is disabled, because the name cannot be used as a PrimaryKey in a ProductionTable.
- You can use Siddhi functions to create a unique ID for the received events, which can then be used to apply the Primary Key constraint on the data store records. (http://wso2.github.io/siddhi/documentation/siddhi-4.0/#function)
Viewing the Results:
See the output for raw materials on the console. You can use searchSweetProductionStream to check for inserted, deleted, and updated events.
*/
Thanks in advance.
Please provide your Cassandra credentials (username and password).
Eg: #store(type='cassandra' , cassandra.host='localhost', username='cassandra', password='cassandra',keyspace='production',
column.family='SweetProductionTable')
Please refer to this sample:
Siddhi store cassandra
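Applied to the table definition from the question, the corrected store annotation would look roughly like this (assuming the default cassandra/cassandra credentials; replace them with your own):
#store(type='cassandra', cassandra.host='localhost', username='cassandra', password='cassandra', keyspace='production', column.family='SweetProductionTable')
#index('uid')
#primaryKey('uid')
define table SweetProductionTable (uid int, name string);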

Azure SQL Data Warehouse: No catalog entry found for partition ID <id> in database <id>. The metadata is inconsistent. Run DBCC CHECKDB

I am working on moving stored procedures from an on-prem SQL Server database to an Azure SQL Data Warehouse (ASDW). Throughout the process I have had to work around a few missing features - time-consuming but not impossible. One thing I have had to do is replace CTEs followed by MERGE statements with temp tables followed by UPDATE/INSERT/DELETE statements (since CTEs cannot be followed by these statements). At the beginning of each SP I check for the temp tables and delete them if they exist.
Today, I created another stored procedure in the ASDW without any temp tables (no updates/inserts/deletes, so I left the CTEs in there). It "compiled", and I was able to run it without issue (it returned an empty result set, as there is no data yet). I created another SP after this, and when I went to execute it, I got the following error:
...No catalog entry found for partition ID (id) in database 26. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption...
I then went back to the first SP that I mentioned, and it gave me the same error, even though it had previously run without flaw.
I tried running DBCC CHECKDB as instructed but alas, it is not supported/doesn't work.
I dug around a lot, and what I ended up doing was scaling my database from 100 DWUs to 500 DWUs. I am at 0.16% of my database storage size limit, and there is barely any data anywhere (the total DB size is <300MB).
Is there an explanation for this? If not, I can't in good conscience use this platform in a production environment.
Full error:
Msg 110802, Level 16, State 1, Line 1
110802;An internal DMS error occurred that caused this operation to fail.
Details: Exception: Microsoft.SqlServer.DataWarehouse.DataMovement.Workers.DmsSqlNativeException,
Message: SqlNativeBufferReader.Run, error in OdbcExecuteQuery: SqlState: 42000, NativeError: 608, 'Error calling: SQLExecDirect(this->GetHstmt(), (SQLWCHAR *)statementText, SQL_NTS), SQL return code: -1 | SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]No catalog entry found for partition ID 72057594047758336 in database 36. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption. | Error calling: pReadConn->ExecuteQuery(statementText, bufferFormat) | state: FFFF, number: 134148, active connections: 100', Connection String: Driver={pdwodbc};APP=TypeC01-DmsNativeReader:DB196\mpdwsvc (2504)- ODBC;Trusted_Connection=yes;AutoTranslate=no;Server=\\.\pipe\DB.196-bb5f9dd884cf\sql\query
I'm sorry to hear about your experience with Azure SQL Data Warehouse. I believe this is a defect related to BIT data type handling for NOT NULL columns. Can you confirm that you have a BIT NOT NULL column (e.g., CREATE TABLE t1 (IsTrue BIT NOT NULL);)?
If so, a fix has been coded and is in testing for release. To mitigate this now, you can either switch to a TINYINT or remove the NOT NULL setting for the column.
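A minimal sketch of the two suggested mitigations, using the hypothetical t1 table from above:
-- original definition that triggers the defect
CREATE TABLE t1 (IsTrue BIT NOT NULL);
-- mitigation 1: switch the column to TINYINT
CREATE TABLE t1 (IsTrue TINYINT NOT NULL);
-- mitigation 2: keep BIT but drop the NOT NULL constraint
CREATE TABLE t1 (IsTrue BIT NULL);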

Could not retrieve endpoint ranges: java.lang.IllegalArgumentException

I am trying to load SSTables into Cassandra using the sstableloader utility, but I am getting the following error:
> java.lang.IllegalArgumentException
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:275)
at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
at org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1824)
at org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1117)
at org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
... 2 more
The command I am using to load the sstable is:
$bin/sstableloader -d nodename -u username -pw password path/to/sstable/keyspacename/tablename
This was working a few days back. I am not sure what has changed or how to debug it.
I am using DataStax.
I am loading the sstable from the same node that is in the cluster, i.e. my source and destination node are the same.
Has someone seen this error before?
Cassandra version: 2.1
Any help is appreciated.
The exception in the stack trace comes from this piece of code:
if (version >= Server.VERSION_3)
{
    int size = input.getInt();
    if (size < 0)
        return null;
    return ByteBufferUtil.readBytes(input, size); // HERE !
}
I'm wondering if you're loading sstables that were generated by Cassandra 2.1 or an older version, because the issue seems to be at the byte-encoding level.
There is also a possibility that your SSTables are corrupted.
How did you get those sstables? From a copy of another Cassandra instance? Generated by CQLSSTableWriter?
I had this problem again, so I debugged it a little to find the root cause. The problem is that if at any time you have altered your Cassandra table by dropping some column, it triggers a bug in sstableloader. That is why dropping the table and creating it again works.
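If you want to check whether a table has dropped columns recorded before reloading, a hypothetical query against the Cassandra 2.1 system schema (assuming your version exposes the dropped_columns map in system.schema_columnfamilies) would be:
-- run in cqlsh; substitute your keyspace and table names
SELECT keyspace_name, columnfamily_name, dropped_columns
FROM system.schema_columnfamilies
WHERE keyspace_name = 'keyspacename' AND columnfamily_name = 'tablename';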
