Why was the key of the entities not migrated? - Cassandra

I am new to Cassandra.
The "hash_key" is different from the "key_desc_id", and I have some issues because of that. What should I do to fix this?
id | name
---------------+----------------------------------------------
1638430493227 | default_cd77c646-cfa4-40b9-9136-5ce83fc586c9
1638430494138 | k2_hash_key
select id,key_desc_id from k2view_cc.entity;
id | key_desc_id
------------------------------------------------------------------+---------------
D54C994CF3CBDA1F70E37853B07388D0C65E824F1BDF3D4F83731513EB8A5399 | 1572012180007
DD5C1E2D9395CA6B1B5ABDF87C03392493F03328F37830045749EA1AE725AF91 | 1572012180007
3467EA971FC018EB561F07F57547CF17E618B76D93B4F8471280C3CF8FB32D58 | 1572012180007
EB4A8D07DF41EDDB49D530BE39B8D8850AFF7D108787CE1EF46EE27D00E99E57 | 1572012180007
AB92F2C90E3337EED0B68B2EC1B56B6FA45A9461C711406B7ED01BE7B29F75EE | 1572012180007
9BC6D9862B68C8AE810CC1A00BFF87ADF6A95A262C98987780807D458682330C | 1572012180007
AFE8B598517CCFC09E71DF10417DAC3B1002731F43334CA6872BC4CEA9A07D75 | 1572012180007
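For anyone hitting the same mismatch, a minimal check (a sketch, not a fix) is to list which entities still reference the unexpected key_desc_id. This assumes the k2view_cc.entity table shown above and a numeric key_desc_id column (quote the value if the column is text); ALLOW FILTERING is needed because key_desc_id is presumably not part of the primary key:
SELECT id, key_desc_id
FROM k2view_cc.entity
WHERE key_desc_id = 1572012180007
ALLOW FILTERING;  -- lists the entities still pointing at the old key descriptor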

Related

Problems encountered in installing Fabric Explorer

I installed fabricexplorer according to the guide on the official website, but I may be having problems initializing the database.
macOS:
$ cd blockchain-explorer/app/persistence/fabric/postgreSQL/db
$ ./createdb.sh
$ createdb whoami
$ psql -c '\l'
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+----------+----------+---------+-------+-----------------------
fabricexplorer | hyy | UTF8 | C | C |
heyueyue | heyueyue | UTF8 | C | C |
postgres | postgres | UTF8 | C | C | =Tc/postgres +
| | | | | postgres=CTc/postgres
template0 | heyueyue | UTF8 | C | C | =c/heyueyue +
| | | | | heyueyue=CTc/heyueyue
template1 | heyueyue | UTF8 | C | C | =c/heyueyue +
| | | | | heyueyue=CTc/heyueyue
(5 rows)
heyueyue@heyueyuedeMacBook-Pro test-network % psql $DATABASE_DATABASE -c '\d'
Did not find any relations.
However, there are tables in the database fabricexplorer.
fabricexplorer-# \d
List of relations
Schema | Name | Type | Owner
--------+---------------------------+----------+-------
public | blocks | table | hyy
public | blocks_id_seq | sequence | hyy
public | chaincodes | table | hyy
public | chaincodes_id_seq | sequence | hyy
public | channel | table | hyy
public | channel_id_seq | sequence | hyy
public | orderer | table | hyy
public | orderer_id_seq | sequence | hyy
public | peer | table | hyy
public | peer_id_seq | sequence | hyy
public | peer_ref_chaincode | table | hyy
public | peer_ref_chaincode_id_seq | sequence | hyy
public | peer_ref_channel | table | hyy
public | peer_ref_channel_id_seq | sequence | hyy
public | transactions | table | hyy
public | transactions_id_seq | sequence | hyy
public | users | table | hyy
public | users_id_seq | sequence | hyy
public | write_lock | table | hyy
And when I start the service, I can't log in with the correct user and password.
[2022-06-19T16:29:11.875] [DEBUG] PgService - the getRowsBySQlCase select * from channel where name=$1 and channel_genesis_hash=$2 and network_name = $3
Can anybody help?
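One thing worth checking (a guess based on the output above, not a confirmed fix): the "Did not find any relations." result suggests psql connected to a database other than fabricexplorer. If $DATABASE_DATABASE is empty or unset, psql receives no database argument and falls back to a database named after the current user. Naming the database explicitly should tell you whether the tables are really there:
$ echo $DATABASE_DATABASE          # verify the variable is actually set
$ psql -d fabricexplorer -c '\d'   # list relations in the fabricexplorer database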

Error while querying a Hive table with a map datatype in Spark SQL, but working when executed in HiveQL

I have a Hive table with the below structure and sample data:
+---------------+--------------+----------------------+
| column_value | metric_name | key |
+---------------+--------------+----------------------+
| A37B | Mean | {0:"202006",1:"1"} |
| ACCOUNT_ID | Mean | {0:"202006",1:"2"} |
| ANB_200 | Mean | {0:"202006",1:"3"} |
| ANB_201 | Mean | {0:"202006",1:"4"} |
| AS82_RE | Mean | {0:"202006",1:"5"} |
| ATTR001 | Mean | {0:"202007",1:"2"} |
| ATTR001_RE | Mean | {0:"202007",1:"3"} |
| ATTR002 | Mean | {0:"202007",1:"4"} |
| ATTR002_RE | Mean | {0:"202007",1:"5"} |
| ATTR003 | Mean | {0:"202008",1:"3"} |
| ATTR004 | Mean | {0:"202008",1:"4"} |
| ATTR005 | Mean | {0:"202008",1:"5"} |
| ATTR006 | Mean | {0:"202009",1:"4"} |
| ATTR006 | Mean | {0:"202009",1:"5"} |
I need to write a Spark SQL query that filters on the key column with a NOT IN condition on the combination of both keys.
The following query works fine in HiveQL in Beeline
select * from your_data where key[0] between '202006' and '202009' and key NOT IN ( map(0,"202009",1,"5") );
But when I try the same query in Spark SQL, I get the following error:
cannot resolve due to data type mismatch: map<int,string>
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:115)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$3.applyOrElse(CheckAnalysis.scala:107)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:278)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:277)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:275)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
Please help!
I got the answer from a different question which I raised before. This query works fine:
select * from your_data where key[0] between 202006 and 202009 and NOT (key[0]="202009" and key[1]="5" );
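For context, Spark SQL does not support equality comparisons on map columns, which is why the NOT IN against map(0,"202009",1,"5") fails analysis even though HiveQL accepts it. The workaround is to spell out each excluded pair against the individual map entries; a sketch extending the accepted fix to several excluded key pairs (the second exclusion is hypothetical, shown only to illustrate the pattern):
select * from your_data
where key[0] between 202006 and 202009
and not (key[0] = "202009" and key[1] = "5")  -- replaces NOT IN ( map(0,"202009",1,"5") )
and not (key[0] = "202008" and key[1] = "4"); -- further exclusions compose the same way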

Between statement is not working on Hive Map column - Spark SQL

I have the following Hive table. The key column holds a map value (key-value pairs). I am executing a Spark SQL query with a between statement on the key column, but it returns no records.
+---------------+--------------+----------------------+---------+
| column_value | metric_name | key |key[0] |
+---------------+--------------+----------------------+---------+
| A37B | Mean | {0:"202009",1:"12"} | 202009 |
| ACCOUNT_ID | Mean | {0:"202009",1:"12"} | 202009 |
| ANB_200 | Mean | {0:"202009",1:"12"} | 202009 |
| ANB_201 | Mean | {0:"202009",1:"12"} | 202009 |
| AS82_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR001 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR001_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR002 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR002_RE | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR003 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR004 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR005 | Mean | {0:"202009",1:"12"} | 202009 |
| ATTR006 | Mean | {0:"202009",1:"12"} | 202008 |
I am running the below Spark SQL query:
SELECT column_value, metric_name,key FROM table where metric_name = 'Mean' and column_value IN ('ATTR003','ATTR004','ATTR005') and key[0] between 202009 and 202003
The query does not return any records. If I use IN (202009,202007,202008,202006,202005,202004,202003) instead of the between statement, it returns results.
Need help!
Try the between bounds the other way around, e.g. between 202003 and 202009. BETWEEN x AND y means >= x AND <= y, so with the bounds reversed the range is empty and no rows match.
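For reference, here is the query from the question with the bounds swapped and nothing else changed:
SELECT column_value, metric_name, key
FROM table
WHERE metric_name = 'Mean'
AND column_value IN ('ATTR003','ATTR004','ATTR005')
AND key[0] BETWEEN 202003 AND 202009  -- lower bound first, upper bound second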

Find all occurrences from a string - Presto

I have the following rows in Hive (HDFS), and I am using Presto as the query engine.
1,#markbutcher72 #charlottegloyn Not what Belinda Carlisle thought. And yes, she was singing about Edgbaston.
2,#tomkingham #markbutcher72 #charlottegloyn It's true the garden of Eden is currently very green...
3,#MrRhysBenjamin #gasuperspark1 #markbutcher72 Actually it's Springfield Park, the (occasional) home of the might
The requirement is to get the following through a Presto query. How can we do this?
1,markbutcher72
1,charlottegloyn
2,tomkingham
2,markbutcher72
2,charlottegloyn
3,MrRhysBenjamin
3,gasuperspark1
3,markbutcher72
select t.id,
       u.token
from mytable as t
-- regexp_extract_all returns an array of every match for the pattern;
-- unnest expands that array into one row per extracted token
cross join unnest(regexp_extract_all(t.text, '(?<=#)\S+')) as u(token);
+----+----------------+
| id | token |
+----+----------------+
| 1 | markbutcher72 |
| 1 | charlottegloyn |
| 2 | tomkingham |
| 2 | markbutcher72 |
| 2 | charlottegloyn |
| 3 | MrRhysBenjamin |
| 3 | gasuperspark1 |
| 3 | markbutcher72 |
+----+----------------+
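As a follow-up usage example (hypothetical, reusing the same mytable and its text column), the unnested tokens can also drive a filter, e.g. to find every row that mentions a given handle:
select t.id
from mytable as t
cross join unnest(regexp_extract_all(t.text, '(?<=#)\S+')) as u(token)
where u.token = 'markbutcher72';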

Through associations in Sails.js

A while ago I asked how to perform "through associations".
I have the following tables:
genres
+-----------+--------------+------+-----+
| Field | Type | Null | Key |
+-----------+--------------+------+-----+
| id | int(6) | NO | PRI |
| slug | varchar(255) | NO | |
| parent_id | int(11) | YES | MUL |
+-----------+--------------+------+-----+
genres_radios
+----------+--------+------+-----+
| Field | Type | Null | Key |
+----------+--------+------+-----+
| genre_id | int(6) | NO | MUL |
| radio_id | int(6) | NO | MUL |
+----------+--------+------+-----+
radios
+-----------+--------------+------+-----+
| Field | Type | Null | Key |
+-----------+--------------+------+-----+
| id | int(5) | NO | PRI |
| slug | varchar(100) | NO | |
| url | varchar(100) | NO | |
+-----------+--------------+------+-----+
The answer is here: Sails.js associations.
Now I was wondering: if I had a new field in the genres_radios table, for example:
genres_radios
+----------+--------+------+-----+
| Field | Type | Null | Key |
+----------+--------+------+-----+
| genre_id | int(6) | NO | MUL |
| new_field| int(10)| NO | |
| radio_id | int(6) | NO | MUL |
+----------+--------+------+-----+
How would I get that attribute while making the join?
It is not implemented yet. Quoting Waterline's documentation:
Many-to-Many Through Associations
Many-to-Many through associations behave the same way as many-to-many
associations with the exception of the join table being automatically
created for you. This allows you to attach additional attributes onto
the relationship inside of the join table.
Coming Soon
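Until that lands, one workaround is to bypass Waterline for this read and join the tables with a native query through the adapter's raw-query support; a sketch in plain SQL using the schema above (new_field as in the table definition):
SELECT g.slug AS genre,
       r.slug AS radio,
       gr.new_field
FROM genres_radios gr
JOIN genres g ON g.id = gr.genre_id
JOIN radios r ON r.id = gr.radio_id;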
