I tried creating a Hive table on Avro using an avsc spec file and need to rename some of the columns. I used aliases, but it seems they are not working: the columns are returned as null when I query the table.
SPARK DATAFRAME TO SAVE DATA
import spark.implicits._

// Build a two-column DataFrame and write it out as Avro.
val data = Seq(("john", "adams"), ("john", "smith"))
val columns = Seq("fname", "lname")
val df = data.toDF(columns: _*)
df.write.format("avro").save("/test")
AVSC SPEC FILE
{
"type" : "record",
"name" : "test",
"doc" : " import of test",
"fields" : [ {
"name" : "first_name",
"type" : [ "null", "string" ],
"default" : null,
"aliases" : [ "fname" ],
"columnName" : "fname",
"sqlType" : "12"
}, {
"name" : "last_name",
"type" : [ "null", "string" ],
"default" : null,
"aliases" : [ "lname" ],
"columnName" : "lname",
"sqlType" : "12"
} ],
"tableName" : "test"
}
EXTERNAL HIVE TABLE
create external table test
STORED AS AVRO
LOCATION '/test'
TBLPROPERTIES ('avro.schema.url'='/test.avsc');
HIVE QUERY
SELECT last_name from test;
This returns null even though the Avro data contains values under the original name, i.e. lname.
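One possible workaround is to make the aliases unnecessary altogether. A minimal sketch, assuming renaming at write time is acceptable: rename the DataFrame columns before writing so the Avro field names already match the names declared in the avsc.

// Write the Avro files with the final column names so Hive can resolve
// the fields by name, without relying on avsc aliases.
val renamed = df
  .withColumnRenamed("fname", "first_name")
  .withColumnRenamed("lname", "last_name")
renamed.write.format("avro").save("/test")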
I have a Spark-Kafka Structured Streaming pipeline. It listens to a topic that may contain JSON records of varying schema.
Now I want to resolve the schema based on the key (x_y) and then apply it to the value portion to parse the JSON record.
Here the 'y' part of the key indicates the schema type.
I tried to get the schema string from a UDF and then pass it to the from_json() function.
But it fails with the following exception:
org.apache.spark.sql.AnalysisException: Schema should be specified in DDL format as a string literal or output of the schema_of_json function instead of `schema`
Code used:
df.withColumn("data_type", element_at(split(col("key").cast("string"),"_"),1))
.withColumn("schema", schemaUdf($"data_type"))
.select(from_json(col("value").cast("string"), col("schema")).as("data"))
Schema demo:
{
"type" : "struct",
"fields" : [ {
"name" : "name",
"type" : {
"type" : "struct",
"fields" : [ {
"name" : "firstname",
"type" : "string",
"nullable" : true,
"metadata" : { }
}]
},
"nullable" : true,
"metadata" : { }
} ]
}
UDF used:
import java.io.File
import org.apache.spark.sql.types.DataType

// mapper is a Jackson ObjectMapper; fileName points at the schema JSON file.
lazy val fetchSchema = (fileName: String) => {
  DataType.fromJson(mapper.readTree(new File(fileName)).toString)
}
val schemaUdf = udf[DataType, String](fetchSchema)
Note: I am not using the Confluent schema registry feature.
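The failure is expected: from_json needs a foldable schema (a DDL string literal, a StructType, or the output of schema_of_json), not a per-row column coming out of a UDF. Below is a minimal sketch of one common workaround, assuming the set of schema types is known up front; the schema file location /schemas/<type>.json is hypothetical.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{DataType, StructType}

// Resolve each schema on the driver instead of inside a UDF.
def schemaFor(fileName: String): StructType =
  DataType.fromJson(scala.io.Source.fromFile(fileName).mkString)
    .asInstanceOf[StructType]

// The known schema-type keys (assumed; taken from the key's type part).
val types = Seq("a", "b")

val withType = df
  .withColumn("data_type", element_at(split(col("key").cast("string"), "_"), 1))

// Each schema type gets its own branch (and typically its own sink),
// because the parsed struct types differ from one schema to the next.
types.foreach { t =>
  val branch = withType
    .filter(col("data_type") === t)
    .select(from_json(col("value").cast("string"), schemaFor(s"/schemas/$t.json")).as("data"))
  // branch.writeStream... start one streaming query per schema type
}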
I have updated the avsc file to rename a column, like:
"fields" : [ {
"name" : "department_id",
"type" : [ "null", "int" ],
"default" : null
}, {
"name" : "office_name",
"type" : [ "null", "string" ],
"default" : null,
"aliases" : [ "department_name" ],
"columnName" : "department_name"
} ]
However, in my Avro file the columns are like department_id : 10, department_name : "maths".
Now when I query like below,
select office_name from t
it always returns null values. Will it not return the value from department_name in the Avro data? Is there a way to have multiple names for a column in an avsc?
From the Cloudera community: "we recommend to use the original name rather than the aliased name of the field in the table, as the Avro aliases are stripped during loading into Spark."
Schema with aliases:
val schema = new Schema.Parser().parse(new File("../spark-2.4.3-bin-hadoop2.7/examples/src/main/resources/user.avsc"))
schema: org.apache.avro.Schema = {"type":"record","name":"User","namespace":"example.avro","fields":[{"name":"name","type":"string","aliases":["customer_name"],"columnName":"customer_name"},{"name":"favorite_color","type":["string","null"],"aliases":["color"],"columnName":"color"}]}
Spark stripping the aliases:
val usersDF = spark.read.format("avro").option("avroSchema",schema.toString).load("../spark-2.4.3-bin-hadoop2.7/examples/src/main/resources/users.avro")
usersDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string]
I guess you can go with Spark's built-in features to rename a column (sketched below), but if you find any other workaround, let me know as well.
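A minimal sketch of the built-in rename, using the department example above (the load path is hypothetical):

// Read with the original Avro field names, then rename in the DataFrame,
// since the avsc aliases are stripped during loading into Spark.
val renamed = spark.read.format("avro")
  .load("/path/to/avro")
  .withColumnRenamed("department_name", "office_name")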
According to this issue, Cassandra's storage format was updated in 3.0.
Previously I could use cassandra-cli to see how an SSTable is built and get something like this:
[default#test] list phonelists;
-------------------
RowKey: scott
=> (column=, value=, timestamp=1374684062860000)
=> (column=phonenumbers:bill, value='555-7382', timestamp=1374684062860000)
=> (column=phonenumbers:jane, value='555-8743', timestamp=1374684062860000)
=> (column=phonenumbers:patricia, value='555-4326', timestamp=1374684062860000)
-------------------
RowKey: john
=> (column=, value=, timestamp=1374683971220000)
=> (column=phonenumbers:doug, value='555-1579', timestamp=1374683971220000)
=> (column=phonenumbers:patricia, value='555-4326', timestamp=137468397122
What would the internal format look like in the latest version of Cassandra? Could you provide an example?
What utility can I use to see the internal representation of a table in Cassandra in the way shown above, but with the new SSTable format?
All that I have found on the internet is that the partition header now stores column names, rows store clustering values, and there are no duplicated values.
How can I look into it?
Prior to 3.0, sstable2json was a useful utility for understanding how data is organized in SSTables. This feature is not currently present in Cassandra 3.0, but there will be an alternative eventually. Until then, Chris Lohfink and I have developed an alternative to sstable2json (sstable-tools) for Cassandra 3.0, which you can use to understand how data is organized. There is some talk about bringing this into Cassandra proper in CASSANDRA-7464.
A key differentiator between the storage format of older versions of Cassandra and that of Cassandra 3.0 is that an SSTable was previously a representation of partitions and their cells (identified by their clustering and column name), whereas in Cassandra 3.0 an SSTable represents partitions and their rows.
You can read about these changes in more detail in this blog post by the primary developer of the changes, who does a great job of explaining them in detail.
The largest benefit you will see is that, in the general case, your data size will shrink (sometimes by a large factor), as a lot of the overhead introduced by CQL has been eliminated by some key enhancements.
Here's an example showing the difference between C* 2 and 3.
Schema:
create keyspace demo with replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
use demo;
create table phonelists (user text, person text, phonenumbers text, primary key (user, person));
insert into phonelists (user, person, phonenumbers) values ('scott', 'bill', '555-7382');
insert into phonelists (user, person, phonenumbers) values ('scott', 'jane', '555-8743');
insert into phonelists (user, person, phonenumbers) values ('scott', 'patricia', '555-4326');
insert into phonelists (user, person, phonenumbers) values ('john', 'doug', '555-1579');
insert into phonelists (user, person, phonenumbers) values ('john', 'patricia', '555-4326');
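For reference, the JSON dumps below can be produced along these lines (the SSTable paths are illustrative, and the memtables must first be flushed to disk, e.g. with nodetool flush):

# Cassandra 2.2 (sstable2json ships with the distribution)
sstable2json <data-dir>/demo/phonelists-*/la-1-big-Data.db

# Cassandra 3.0 (sstable-tools)
java -jar sstable-tools.jar toJson <data-dir>/demo/phonelists-*/ma-1-big-Data.db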
sstable2json C* 2.2 output:
[
{"key": "scott",
"cells": [["bill:","",1451767903101827],
["bill:phonenumbers","555-7382",1451767903101827],
["jane:","",1451767911293116],
["jane:phonenumbers","555-8743",1451767911293116],
["patricia:","",1451767920541450],
["patricia:phonenumbers","555-4326",1451767920541450]]},
{"key": "john",
"cells": [["doug:","",1451767936220932],
["doug:phonenumbers","555-1579",1451767936220932],
["patricia:","",1451767945748889],
["patricia:phonenumbers","555-4326",1451767945748889]]}
]
sstable-tools toJson C* 3.0 output:
[
{
"partition" : {
"key" : [ "scott" ]
},
"rows" : [
{
"type" : "row",
"clustering" : [ "bill" ],
"liveness_info" : { "tstamp" : 1451768259775428 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-7382" }
]
},
{
"type" : "row",
"clustering" : [ "jane" ],
"liveness_info" : { "tstamp" : 1451768259793653 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-8743" }
]
},
{
"type" : "row",
"clustering" : [ "patricia" ],
"liveness_info" : { "tstamp" : 1451768259796202 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-4326" }
]
}
]
},
{
"partition" : {
"key" : [ "john" ]
},
"rows" : [
{
"type" : "row",
"clustering" : [ "doug" ],
"liveness_info" : { "tstamp" : 1451768259798802 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-1579" }
]
},
{
"type" : "row",
"clustering" : [ "patricia" ],
"liveness_info" : { "tstamp" : 1451768259908016 },
"cells" : [
{ "name" : "phonenumbers", "value" : "555-4326" }
]
}
]
}
]
While the output is larger (that is more a consequence of the tool than of the format), the key differences you can see are:
Data is now a collection of Partitions and their Rows (which include cells) instead of a collection of Partitions and their Cells.
Timestamps are now at the row level (liveness_info) instead of at the cell level. If some cells in a row differ in their timestamps, the new storage engine delta-encodes the difference at the cell level to save space. The same applies to TTLs. As you can imagine, this saves a lot of space if you have many non-key columns, as the timestamp does not need to be repeated.
The clustering information (in this case we cluster on 'person') is now present at the row level instead of the cell level, which saves a lot of overhead since the clustering column values no longer have to be repeated in each cell.
I should note that in this particular example the benefits of the new storage engine aren't fully realized, since there is only one non-clustering column.
There are a number of other improvements not shown here (like the ability to store row-level range tombstones).
Is it possible in Logstash, with the use of filters, to adjust the resulting JSON to build nested documents?
I will need to build JSON like the following:
[
{
"name" : "hd_used",
"columns" : ["value", "host", "mount"],
"points" : [
[23.2, "serverA", "/mnt"]
]
}
]