How to solve errors like "Partition's table metadata are out of sync for table" - SingleStore

Recently I faced a MemSQL leaf hardware failure, and we ended up missing partitions and their data because we run a MemSQL cluster with redundancy 1 (no replica partitions).
Then we started noticing errors like:
"Java.sql.SQLException: Leaf Error (10.XXXX:3306): Partition's table metadata are out of sync for table"
despite having recreated the missing partitions.
Is there a way to approach this issue, or will I have to drop the data in all affected tables and re-import it from other sources?

It sounds likely that the recreated partitions ended up with mismatched table metadata, because the metadata changed at some point in between. You can try the following:
Recreate the tables. This can be done by insert-selecting the data into a newly created table (which may not be possible if no SELECT queries work) or by reloading the data from an external source.
Check the table schemas on the recreated partition versus the other partitions and look for the mismatch - you could diff the SHOW CREATE TABLE output (see the sketch below). It may then be possible to manually ALTER the tables on the leaf partitions to correct them, or to recreate the formerly missing partition with a matching schema.
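For illustration, here is a rough sketch of how you might compare SHOW CREATE TABLE output across leaf partitions from a script. It is only a sketch: the leaf hosts, credentials and partition database names (mydb_0, mydb_1, ...) are placeholders you would need to adapt to your cluster, and it assumes the pymysql package (MemSQL/SingleStore speaks the MySQL protocol).

# Hypothetical sketch: diff SHOW CREATE TABLE across leaf partitions.
# Hosts, credentials and partition database names are placeholders.
import pymysql

LEAVES = ["10.0.0.1", "10.0.0.2"]        # leaf hosts (placeholder)
PARTITION_DBS = ["mydb_0", "mydb_1"]     # partition databases on each leaf (placeholder)
TABLE = "mytable"

def show_create(host, db, table):
    conn = pymysql.connect(host=host, port=3306, user="root", password="")
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW CREATE TABLE `{}`.`{}`".format(db, table))
            return cur.fetchone()[1]     # second column holds the CREATE TABLE text
    finally:
        conn.close()

# Collect the schemas and flag any partition whose DDL differs from the first one seen.
schemas = {(h, d): show_create(h, d, TABLE) for h in LEAVES for d in PARTITION_DBS}
reference = next(iter(schemas.values()))
for key, ddl in schemas.items():
    if ddl != reference:
        print("Schema mismatch on", key)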

Related

How to solve the maximum view depth error in Spark?

I have a very long task that creates a bunch of views using Spark SQL and I get the following error at some step: pyspark.sql.utils.AnalysisException: The depth of view 'foobar' exceeds the maximum view resolution depth (100).
I have been searching in Google and SO and couldn't find anyone with a similar error.
I have tried caching the view foobar, but that doesn't help. I'm thinking of creating temporary tables as a workaround, as I would prefer not to change the current Spark configuration if possible, but I'm not sure if I'm missing something.
UPDATE:
I tried creating tables in parquet format to reference tables and not views, but I still get the same error. I applied that to all the input tables to the SQL query that causes the error.
If it makes a difference, I'm using ANSI SQL, not the python API.
Solution
Using Parquet tables worked for me after all. I spotted that I was still missing one view to persist as a table, which is why it wasn't working before.
So I changed my SQL statements from this:
CREATE OR REPLACE TEMPORARY VIEW `VIEW_NAME` AS
SELECT ...
To:
CREATE TABLE `TABLE_NAME` USING PARQUET AS
SELECT ...
This moves all the critical views to Parquet tables under spark-warehouse/ (or whatever warehouse location you have configured).
Note:
This will write the table to the master node's disk. Make sure you have enough disk space, or consider dumping to an external data store like S3 or what have you. Read this as an alternative - and now preferred - solution using checkpoints.
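For reference, the checkpoint-based alternative looks roughly like the following PySpark sketch. The checkpoint directory and the stand-in query are placeholders; the point is that checkpoint() materializes the data and truncates the lineage, so view resolution no longer has to walk the whole chain of upstream views.

# Minimal sketch of the checkpoint alternative; paths and names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("checkpoint-example").getOrCreate()
# Checkpoints need a reliable directory (HDFS, S3, ...); this path is an example.
spark.sparkContext.setCheckpointDir("hdfs:///tmp/checkpoints")

# Stand-in for the long chain of SQL transformations behind the deep view.
df = spark.range(100).selectExpr("id", "id * 2 AS doubled")

# checkpoint() writes the data out and cuts the lineage; the returned DataFrame
# reads from the checkpointed files instead of re-resolving the upstream views.
df = df.checkpoint()
df.createOrReplaceTempView("VIEW_NAME")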

Spark "Listing leaf files" fails with "File does not exist"

After a recent upgrade to HDP 3.1, now using Spark 2.3.x instead of 2.2.x, a query like:
spark.sql("SELECT * from mydb.mytable").filter('partition_date between "202001010000" and "202001020000").write.parquet("foo.out")
sometimes fails when reading from an HDFS-backed Hive table (no object storage).
You have to know that the underlying data (an EXTERNAL table in Hive) has a retention period, and any data older than that period is deleted. This deletion might sometimes occur during the execution of the above-mentioned query; it runs every 5 minutes.
Even though:
PartitionFilters: [isnotnull(partition_date#3099), (partition_date#3099 >= 202001010000), (partition_date#3099 <= 202001020000)]
partition filtering (predicate pushdown) seems to be enabled, more partitions than desired are read during the initial path traversal.
After the upgrade to 2.3, Spark shows the progress of listing file directories in the UI. Interestingly, we always get two entries: one for the oldest available directory, and one for the lower of the two boundaries of interest:
Listing leaf files and directories for 380 paths:
/path/to/files/on/hdfs/mydb.db/mytable/partition_date=202001010000/sub_part=0, ...
Listing leaf files and directories for 7100 paths:
/path/to/files/on/hdfs/mydb.db/mytable/partition_date=201912301300/sub_part=0, ...
Notice:
the logged numbers of paths (380, 7100) both do not seem to reflect what a manual check would suggest
the job (sometimes) fails during the recursive listing of leaf files
the error message:
File does not exist: /path/to/files/on/hdfs/mydb.db/mytable/partition_date=201912301300/sub_part=0/file_name_unique_hash_timestamp.par
How can I force Spark to list only directories in the desired interval, and not ones outside it that might collide with the maximum data retention duration?
It looks like this is related:
Spark lists all leaf nodes even in partitioned data (though that question is about S3)
#meniluca is correct in the sense that there must be a mismatch between what HDFS actually holds and what the Hive metastore reports as available.
However, instead of utilizing views which look a bit spooky/not easy to understand (in the context of file paths being included in the read operation), I prefer:
spark.read.option("basePath", "/path/to/mydb.db/mytable").orc("/path/to/mydb.db/mytable/partition_date=202001[1-2]*/*", "/path/to/mydb.db/mytable/partition_date=202001[3-4]*/*")
This forces Spark to list only the right (desired) paths.
Have you tried with this?
spark.sql("MSCK REPAIR TABLE table_name")
it saved my life so many times.
Edit.
After the discussion in the comments, please try creating a view. The problem cannot be solved unless you re-run "select * from ..." immediately after the partition is dropped.
Creating a view will provide you with such a workaround:
CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
[COMMENT view_comment]
[TBLPROPERTIES (property_name = property_value, ...)]
AS SELECT * FROM mytable;
from Hive/LanguageManual+DDL
Then replace the table with your view. If you don't have the rights to create such a view, please ask an admin to do it for you. They should accommodate this request, as it seems you are trying to solve their problem, IMO.
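As a concrete (hypothetical) instance of that template through Spark SQL - the database, view name and partition bounds here are placeholders:

# Hypothetical example of the suggested view workaround; names and bounds are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE VIEW IF NOT EXISTS mydb.mytable_recent AS
    SELECT *
    FROM mydb.mytable
    WHERE partition_date BETWEEN '202001010000' AND '202001020000'
""")

# Downstream jobs then read from the view instead of the raw table.
df = spark.sql("SELECT * FROM mydb.mytable_recent")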

Write to a date-partitioned BigQuery table using the beam.io.gcp.bigquery.WriteToBigQuery module in Apache Beam

I'm trying to write a Dataflow job that needs to process logs located on storage and write them to different BigQuery tables. Which output tables are used depends on the records in the logs, so I do some processing on the logs and yield them with a key based on a value in the log, after which I group the logs by key. I need to write all the logs grouped under the same key to a table.
I'm trying to use the beam.io.gcp.bigquery.WriteToBigQuery module with a callable as the table argument as described in the documentation here
I would like to use a date-partitioned table as this will easily allow me to write_truncate on the different partitions.
Now I encounter 2 main problems:
The CREATE_IF_NEEDED disposition gives an error because it has to create a partitioned table. I can circumvent this by making sure the tables exist in a previous step, creating them if they don't.
If I load older data, I get the following error:
"The destination table's partition table_name_x$20190322 is outside the allowed bounds. You can only stream to partitions within 31 days in the past and 16 days in the future relative to the current date."
This seems like a limitation of streaming inserts; is there any way to do batch inserts?
Maybe I'm approaching this wrong, and should use another method.
Any guidance on how to tackle these issues is appreciated.
I'm using Python 3.5 and apache-beam==2.13.0.
That error message can be logged when one mixes the use of an ingestion-time partitioned table with a column-partitioned table (see this similar issue). Summarizing from the link, it is not possible to use column-based partitioning (as opposed to ingestion-time partitioning) and write to tables with partition suffixes.
In your case, since you want to write to different tables based on a value in the log and have partitions within each table, forgo the use of the partition decorator when selecting which table to write to (use "[prefix]_YYYYMMDD") and then have each individual table be column-partitioned.
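To illustrate, here is a hedged sketch of what that looks like with a callable table argument. The project, dataset and key-to-table mapping are assumptions, and the per-prefix "[prefix]_YYYYMMDD" tables are assumed to already exist as column-partitioned tables (no $YYYYMMDD partition decorator in the table name).

# Sketch only: project/dataset names and the key-to-table mapping are placeholders.
import apache_beam as beam
from apache_beam.io.gcp.bigquery import WriteToBigQuery, BigQueryDisposition

def to_table(element):
    # element is assumed to be a dict with 'log_type' and 'event_date' (YYYYMMDD)
    return "my-project:my_dataset.{}_{}".format(element["log_type"], element["event_date"])

with beam.Pipeline() as p:
    (
        p
        | "CreateLogs" >> beam.Create([
            {"log_type": "access", "event_date": "20190322", "payload": "example"},
        ])
        | "WriteToBQ" >> WriteToBigQuery(
            table=to_table,                       # callable: one destination table per element
            schema="log_type:STRING,event_date:STRING,payload:STRING",
            create_disposition=BigQueryDisposition.CREATE_NEVER,
            write_disposition=BigQueryDisposition.WRITE_APPEND,
        )
    )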

Cassandra 2.1 system schema missing

I have a six node cluster running cassandra 2.1.6. Yesterday I tried to drop a column family and received the message "Column family ID mismatch". I tried running nodetool repair but after repair was complete I got the same message. I then tried selecting from the column family but got the message "Column family not found". I ran the following query to get a list of all column families in my schema
select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'xxx';
At this point I received the message
"Keyspace 'system' not found." I tried the command describe keyspaces and sure enough system was not in the list of keyspaces.
I then tried nodetool resetlocalschema on one of the nodes missing the system keyspace, and when that failed to resolve the problem I tried nodetool rebuild, but got the same messages after the rebuild was complete. I tried stopping the nodes that were missing the system keyspace and restarting them; once the restart was complete, the system keyspace was back and I was able to execute the above query successfully. However, the table I had tried to drop previously was not listed, so I tried to recreate it and once again received the message "Column family ID mismatch".
Finally, I shutdown the cluster and restarted it... and everything works as expected.
My questions are:
How/why did the system keyspace disappear?
What happened to the data being inserted into my column families while the system keyspace was missing from two of the six nodes? (My application didn't seem to have any problems.)
Is there a way I can detect problems like this automatically, or do I have to manually check up on my keyspaces each day?
Is there a way to fix the missing system keyspace and/or the column family ID mismatch without restarting the entire cluster?
EDIT
As per Jim Meyers' suggestion, I queried the cf_id on each node of the cluster and confirmed that all nodes return the same value.
select cf_id from system.schema_columnfamilies where columnfamily_name = 'customer' allow filtering;
cf_id
--------------------------------------
cbb51b40-2b75-11e5-a578-798867d9971f
I then ran ls on my data directory and can see that there are multiple entries for a few of my tables
customer-72bc62d0ff7611e4a5b53386c3f1c9f9
customer-cbb51b402b7511e5a578798867d9971f
My application dynamically creates tables at run time (always using IF NOT EXISTS). It seems likely that the application issued the same CREATE TABLE command on separate nodes at the same time, resulting in the schema mismatch.
Since I've restarted the cluster everything seems to be working fine.
Is it safe to delete the extra file?
i.e. customer-72bc62d0ff7611e4a5b53386c3f1c9f9
1. The cause of this problem is a CREATE TABLE statement collision. Do not generate tables dynamically from multiple clients, even with IF NOT EXISTS. The first thing you need to do is fix your code so that this does not happen. Create your tables manually from cqlsh, allowing time for the schema to settle, and always wait for schema agreement when modifying the schema.
2. Here's the fix:
1) Change your code to not automatically re-create tables (even with IF NOT EXISTS).
2) Run a rolling restart to ensure schema matches across nodes. Run nodetool describecluster around your cluster. Check that there is only one schema version. 
ON EACH NODE:
3) Check your filesystem and see if you have two directories for the table in question in the data directory.
If THERE ARE TWO OR MORE DIRECTORIES:
4) Identify from system.schema_columnfamilies which cf_id is the "new" one (currently in use) - see the sketch after these steps.
cqlsh -e "select * from system.schema_columnfamilies" | grep <table_name>
5) Move the data from the "old" one to the "new" one and remove the old directory. 
6) If there are multiple "old" directories, repeat step 5 for each of them.
7) Run nodetool refresh.
IF THERE IS ONLY ONE DIRECTORY:
No further action is needed.
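For step 4, here is a rough sketch of how you might find the current cf_id for a table and flag stale on-disk directories from a script. The keyspace, table, contact point and data path are placeholders, and the cassandra-driver package is assumed; it relies on the Cassandra 2.1 convention that the directory suffix is the cf_id with the dashes removed.

# Rough sketch (Cassandra 2.1): find the current cf_id and flag stale directories.
# Keyspace, table, contact point and data path are placeholders.
import os
from cassandra.cluster import Cluster

KEYSPACE, TABLE = "xxx", "customer"
DATA_DIR = "/var/lib/cassandra/data/" + KEYSPACE

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()
row = session.execute(
    "SELECT cf_id FROM system.schema_columnfamilies "
    "WHERE keyspace_name = %s AND columnfamily_name = %s",
    (KEYSPACE, TABLE),
).one()
current_suffix = str(row.cf_id).replace("-", "")   # directory names drop the dashes

for name in os.listdir(DATA_DIR):
    if name.startswith(TABLE + "-"):
        status = "current" if name.endswith(current_suffix) else "STALE (old cf_id)"
        print(name, "->", status)

cluster.shutdown()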
Futures
Schema collisions will continue to be an issue until CASSANDRA-9424 is resolved.
Here's an example of it occurring on Jira and being closed as not a problem: CASSANDRA-8387
When you create a table in Cassandra it is assigned a unique id that should be the same on all nodes. Somehow it sounds like your table did not have the same id on all nodes. I'm not sure how that might happen, but maybe there was a glitch when the table was created and it was created multiple times, etc.
You should always use the IF NOT EXISTS clause when creating tables.
To check if your id's are consistent, try this on each node:
In cqlsh, run "SELECT cf_id FROM system.schema_columnfamilies WHERE columnfamily_name = 'yourtablename' ALLOW FILTERING;"
Look in the data directory under the keyspace name the table was created in. You should see a single directory for the table that looks like table_name-cf_id.
If things are correct you should see the same cf_id in all these places. If you see different ones, then somehow things got out of sync.
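If you want to script that cross-node check, something like the following could work; it is a hedged sketch where the node addresses, keyspace and table name are placeholders, and it uses the cassandra-driver whitelist policy to query each node's own view of the schema tables.

# Sketch: ask each node individually for the table's cf_id and compare the answers.
# Node addresses, keyspace and table name are placeholders.
from cassandra.cluster import Cluster
from cassandra.policies import WhiteListRoundRobinPolicy

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
KEYSPACE, TABLE = "xxx", "customer"

ids = {}
for node in NODES:
    # Restrict the driver to a single node so we read that node's local schema view.
    cluster = Cluster([node], load_balancing_policy=WhiteListRoundRobinPolicy([node]))
    session = cluster.connect()
    row = session.execute(
        "SELECT cf_id FROM system.schema_columnfamilies "
        "WHERE keyspace_name = %s AND columnfamily_name = %s",
        (KEYSPACE, TABLE),
    ).one()
    ids[node] = row.cf_id if row else None
    cluster.shutdown()

print(ids)
if len(set(ids.values())) > 1:
    print("cf_id differs between nodes - schema is out of sync")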
For the other symptoms, like the system keyspace disappearing, I don't have a suggestion other than that you hit some kind of bug in the software. If you get a lot of strange symptoms like this, then perhaps you have some kind of data corruption. You might want to think about backing up your data in case things go south and you need to rebuild the cluster.

Cassandra Database Problem

I am using the Cassandra database for a large-scale application and I am new to it. I have a database schema for a particular keyspace, for which I have created columns using the Cassandra Command Line Interface (CLI). Now, when I copied a dataset into the folder /var/lib/cassandra/data/, I was not able to access the values using the key of a particular column; I get the message "zero rows present". But the files are present, all with names like XXXX-Data.db, XXXX-Filter.db, XXXX-Index.db. Can anyone tell me how to access the columns for the existing datasets?
(a) Cassandra doesn't expect you to move its data files around out from underneath it. You'll need to restart if you do any manual surgery like that.
(b) if you didn't also copy the schema definition it will ignore data files for unknown column families.
For what you are trying to achieve, it would probably be better to export and import your SSTables.
You should have a look at bin/sstable2json and bin/json2sstable.
Documentation is there (near the end of the page): Cassandra Operations
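A hedged sketch of that export/import round trip with the old sstable2json and json2sstable tools might look like the following. The paths, keyspace and column family names are placeholders, and the exact options vary by Cassandra version, so check the documentation for yours.

# Hedged sketch of exporting an SSTable to JSON and importing it back
# (old Cassandra 1.x/2.x tooling). Paths, keyspace and column family
# names are placeholders; verify the options against your version's docs.
import subprocess

DATA_FILE = "/var/lib/cassandra/data/MyKeyspace/MyCF/XXXX-Data.db"

# sstable2json prints the rows as JSON on stdout.
with open("/tmp/mycf.json", "wb") as out:
    subprocess.check_call(["bin/sstable2json", DATA_FILE], stdout=out)

# json2sstable rebuilds an SSTable for the target keyspace/column family.
subprocess.check_call([
    "bin/json2sstable", "-K", "MyKeyspace", "-c", "MyCF",
    "/tmp/mycf.json", "/tmp/XXXX-Data.db",
])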
