Unable to import snapshot meta file in YugabyteDB, table not found

[Question posted by a user on YugabyteDB Community Slack]
I’m getting the following error while importing the snapshot:
Error running import_snapshot: Invalid argument (yb/master/catalog_manager_ent.cc:1315): Unable to import snapshot meta file FOOBAR.snapshot: YSQL table not found: notes: OBJECT_NOT_FOUND (master error 3)
I am following this document - https://docs.yugabyte.com/preview/manage/backup-restore/snapshot-ysql/#restore-a-snapshot

After carefully reading the error, I realized it is failing for the table notes.
The schema import was failing because the tservers were not distributed properly across the available AZs; once that was fixed, the import worked.
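As a rough sketch of the kind of check and re-run involved (the master address and the ysql.yugabyte keyspace below are placeholders; FOOBAR.snapshot is the file from the error above):

yb-admin -master_addresses <master-ip>:7100 list_all_tablet_servers
# confirm the registered tservers are spread across the expected AZs before retrying
yb-admin -master_addresses <master-ip>:7100 import_snapshot FOOBAR.snapshot ysql.yugabyte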

Related

Errors persisting after recovering YugabyteDB cluster

[Question posted by a user on YugabyteDB Community Slack]
We’re trying to do a postmortem on an issue we hit in our cluster. It looks like one of our 3 nodes went down and the other two were unable to process requests until it came back. Looking over the logs, I see this message a lot from both before and during the outage:
W0810 00:46:40.740047 3997211 leader_election.cc:285] T 00000000000000000000000000000000 P f65e3577ff4e42a3b935c36a99be1fb9 [CANDIDATE]: Term 7 pre-election: Tablet error from VoteRequest() call to peer df99aaa63d14414785aa9842fcf2fdc1: Invalid argument (yb/tserver/service_util.h:75): RequestConsensusVote: Wrong destination UUID requested. Local UUID: 55065b84a4df41ffac5841463871778a. Requested UUID: df99aaa63d14414785aa9842fcf2fdc1
I0810 00:46:40.740072 3997211 leader_election.cc:244] T 00000000000000000000000000000000 P f65e3577ff4e42a3b935c36a99be1fb9 [CANDIDATE]: Term 7 pre-election: Election decided. Result: candidate lost.
We unfortunately lost the logs from the node that went down due to a data loss issue on our side. Also, I'm actually still seeing the messages above even though the cluster has recovered, so it looks like we're still in a bad state.
What does this mean and does it prevent the cluster from electing a new leader?
The yb-master process recently running on prod-db-us-2 has a UUID of 55065b84a4df41ffac5841463871778a but the yb-master process running on prod-db-us-1 believes that the yb-master on prod-db-us-2 has a UUID of df99aaa63d14414785aa9842fcf2fdc1. This seems like a configuration issue.
My guess is that 55065b84a4df41ffac5841463871778a was originally df99aaa63d14414785aa9842fcf2fdc1. The UUID could change if the data directory is wiped.
You had a loss of data incident on prod-db-us-2 about a month and a half ago so that’s probably when the UUID changed.
Here’s the official documentation for replacing a failed master: https://docs.yugabyte.com/preview/troubleshoot/cluster/replace_master/
Alternatively, you could wipe 55065b84a4df41ffac5841463871778a and create a new yb-master using the gflag instance_uuid_override to force it to initialize with the UUID df99aaa63d14414785aa9842fcf2fdc1.
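For illustration, a minimal sketch of that second approach after wiping the data directory (the addresses, paths, and binary location below are placeholders, not taken from the original cluster):

./bin/yb-master \
  --master_addresses=<master-1>:7100,<master-2>:7100,<master-3>:7100 \
  --rpc_bind_addresses=prod-db-us-2:7100 \
  --fs_data_dirs=<clean-data-dir> \
  --instance_uuid_override=df99aaa63d14414785aa9842fcf2fdc1

On first start with an empty fs_data_dirs, the process should initialize itself with the overridden UUID instead of generating a random one.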

ERROR: SET TRANSACTION ISOLATION LEVEL must not be called in a subtransaction in YugabyteDB

[Question posted by a user on YugabyteDB Community Slack]
I am facing the following issue when I try to dump a database in YugabyteDB 2.13.0.1:
[yuga#yugadb-tserver1 ~]$ ./yugabyte-2.13.0.1/postgres/bin/ysql_dump -d ehrbase > ./backups/ehrbase_100.sql
ysql_dump: [archiver (db)] query failed: ERROR: SET TRANSACTION ISOLATION LEVEL must not be called in a subtransaction
ysql_dump: [archiver (db)] query was: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY, DEFERRABLE
The same issue is reported in https://github.com/yugabyte/yugabyte-db/issues/11630. A fix is already done and will ship in the very next 2.13.* release; it just has not been released yet.

Can’t read JSON from CDC in YugabyteDB

[Question posted by a user on YugabyteDB Community Slack]
I am trying to read data in JSON format from CDC (YugabyteDB 2.13) for which I've used the following configuration:
connector.class=io.debezium.connector.yugabytedb.YugabyteDBConnector
database.streamid=88433e52543c4ecdb20934c6135beb3f
database.user=yugabyte
database.dbname=yugabyte
tasks.max=7
database.server.name=dbserver1
database.port=5433
database.master.addresses=<ip>:7100
database.hostname=<hostname>
database.password=yugabyte
table.include.list= sch.test
snapshot.mode=never
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
But I am unable to display the data in JSON format. In fact, the connector fails with:
Schema for table 'sch.test' is missing (io.debezium.connector.yugabytedb.YugabyteDBChangeRecordEmitter:290)
[Worker-070086c4d98efddbd] [2022-04-12 11:04:15,978] ERROR [qorbital-test-json-msk|task-0] Producer failure (io.debezium.pipeline.ErrorHandler:31)
[Worker-070086c4d98efddbd] org.apache.kafka.connect.errors.ConnectException: Error while processing event at offset {transaction_id=null,
Is there a way I can fix it?
The issue is with the version of YugabyteDB you are using (2.13.0). That release had a bug related to connector restarts that caused a NullPointerException, because the schema cached in the Debezium connector object was not persisted across restarts. To fix the bug, we added logic to send the schema from the server side on every restart, or whenever the connector requests it, so the connector can cache the schema again; this resolves the NPE. Upgrading to a newer version will fix the issue.

Pyodbc Hive connection error in Python

I am trying to connect to Hive from Python with pyodbc. I have installed the ODBC driver for Apache Hive and configured everything, and the connection is good.
When I try to execute my query through pyodbc, I get the error below.
I tried the "use native query" option in the driver configuration and also provided the schema name, but it still shows the same error.
Error
pyodbc.ProgrammingError: ('42S02', '[42S02] [Cloudera][SQLEngine] (31740) Table or view not found:
Thank you
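For reference, a minimal pyodbc sketch of the setup described above (the DSN name HiveDSN and the schema-qualified table my_schema.my_table are placeholders, not from the original question):

import pyodbc

# Connect through the configured Apache Hive ODBC driver DSN (placeholder name).
conn = pyodbc.connect("DSN=HiveDSN", autocommit=True)
cursor = conn.cursor()

# Qualify the table with its schema so the SQL engine can resolve it.
cursor.execute("SELECT * FROM my_schema.my_table LIMIT 10")
for row in cursor.fetchall():
    print(row)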

populate_io_cache_on_flush is not a column defined in this metadata

While connecting to Cassandra 1.2.1 using the DataStax Java driver version 1.0.2, I am getting the error:
Exception in thread "main" java.lang.IllegalArgumentException: populate_io_cache_on_flush is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getIdx(ColumnDefinitions.java:268)
at com.datastax.driver.core.Row.isNull(Row.java:84)
at com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:440)
at com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
at com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:124)
at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:88)
at com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:265)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:220)
at the following line:
cluster = Cluster.builder().addContactPoint("localhost").build();
I tried deleting the folder \var\lib\cassandra and then restarting the Cassandra server, which means there is no previous data. The server starts without any error, but I am still getting the above error when I try to connect to it.
OK, I just discovered that the error went away when I use the latest version of Cassandra (1.2.8). So it might be due to a version incompatibility.
