spark join causing column id ambiguity error - apache-spark

I have the following dataframes:
accumulated_results_df
|-- company_id: string (nullable = true)
|-- max_dd: string (nullable = true)
|-- min_dd: string (nullable = true)
|-- count: string (nullable = true)
|-- mean: string (nullable = true)
computed_df
|-- company_id: string (nullable = true)
|-- min_dd: date (nullable = true)
|-- max_dd: date (nullable = true)
|-- mean: double (nullable = true)
|-- count: long (nullable = false)
I am trying to do a join using Spark SQL as below:
val resultDf = accumulated_results_df.as("a").join(computed_df.as("c"),
( $"a.company_id" === $"c.company_id" ) && ( $"c.min_dd" > $"a.max_dd" ), "left")
It gives the following error:
org.apache.spark.sql.AnalysisException: Reference 'company_id' is ambiguous, could be: a.company_id, c.company_id.;
What am I doing wrong here, and how do I fix it?

It should work using the col function to correctly reference the aliased dataframes and columns:
val resultDf = accumulated_results_df.as("a")
  .join(
    computed_df.as("c"),
    (col("a.company_id") === col("c.company_id")) && (col("c.min_dd") > col("a.max_dd")),
    "left"
  )

I have fixed it as shown below.
val resultDf = accumulated_results_df.join(
  computed_df.withColumnRenamed("company_id", "right_company_id").as("c"),
  accumulated_results_df("company_id") === $"c.right_company_id" && ($"c.min_dd" > accumulated_results_df("max_dd")),
  "left"
)

Related

Pyspark structured streaming - Union data from 2 nested JSON

I have 2 Kafka streaming dataframes. The Spark schemas look like this:
root
|-- key: string (nullable = true)
|-- pmudata1: struct (nullable = true)
| |-- pmu_id: byte (nullable = true)
| |-- time: timestamp (nullable = true)
| |-- stream_id: byte (nullable = true)
| |-- stat: string (nullable = true)
and
root
|-- key: string (nullable = true)
|-- pmudata2: struct (nullable = true)
| |-- pmu_id: byte (nullable = true)
| |-- time: timestamp (nullable = true)
| |-- stream_id: byte (nullable = true)
| |-- stat: string (nullable = true)
How can I union all rows from both streams as they arrive, per specific batch window? The positions of the columns in both streams are the same.
Each stream has a different pmu_id value, so I can differentiate records by that value.
unionByName or union produces a stream from a single dataframe.
I would need to explode the column names, I guess, something like this, but that is for Scala.
Is there a way to automatically explode the whole JSON into columns and union them?
You can use the explode function only with array and map types. In your case, the column pmudata2 has type StructType, so simply use the star * to select all sub-fields, like this:
df1 = df.selectExpr("key", "pmudata2.*")
#root
#|-- key: string (nullable = true)
#|-- pmu_id: byte (nullable = true)
#|-- time: timestamp (nullable = true)
#|-- stream_id: byte (nullable = true)
#|-- stat: string (nullable = true)
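Once both streams are flattened to the same top-level columns, a plain unionByName should line them up. A minimal sketch, assuming the two input streaming dataframes are named stream1_df and stream2_df (those names are not in the original question):
# Flatten each stream, then union the results by column name.
flat1 = stream1_df.selectExpr("key", "pmudata1.*")
flat2 = stream2_df.selectExpr("key", "pmudata2.*")

# Both now share key, pmu_id, time, stream_id and stat, so unionByName works.
combined = flat1.unionByName(flat2)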

How to add an optional column inside struct field with pyspark

I have currently created a struct field in this way:
df = df.withColumn('my_struct', struct(
    col('id').alias('id_test'),
    col('value').alias('value_test')
).alias('my_struct'))
The thing is that now I need to add an extra field to my_struct called "optional". This field must be there when it exists and be removed when it does not. Sadly, values like null/None are not an option.
So far I have two different dataframes: one with the desired value and the column, keyed by id, and another one without the value/column but with all the rest of the information.
df_optional = df_optional.select('id','optional')
df = df.select('id','value','my_struct')
I want to add the optional value into df.my_struct when df_optional.id matches df.id, and keep the rest as it is.
Up to this point I have this:
df_with_option = df.join(df_optional, on=['id'], how='inner') \
    .withColumn('my_struct', struct(
        col('id').alias('id_test'),
        col('value').alias('value_test'),
        col('optional').alias('optional')
    ).alias('my_struct')).drop('optional')
df_without = df.join(df_optional, on=['id'], how='leftanti') # it already has my_struct
But a union requires matching columns, so my code breaks.
df_result = df_without.unionByName(df_with_option)
I want to union both dataframes because at the end I write a JSON file partitioned by id:
df_result.repartitionByRange(df_result.count(), 'id').write.format('json').mode('overwrite').save('my_path')
Those JSON files should have the 'optional' column when it has values; otherwise it should be out of the schema.
Any help will be appreciated.
--ADDITIONAL INFO.
Schema input:
df_optional
|-- id: string (nullable = true)
|-- optional: string (nullable = true)
df
|-- id: string (nullable = true)
|-- value: string (nullable = true)
|-- my_struct: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: string (nullable = true)
Schema output:
df_result
|-- id: string (nullable = true)
|-- value: string (nullable = true)
|-- my_struct: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: string (nullable = true)
| |-- optional: string (nullable = true) (*)
(*) Only when it exists.
--UPDATE
I think that it is just not possible that way. I probably need to keep both dataframes apart and just write them two times. Something like this:
df_without.repartitionByRange(df_without.count(), 'id').write.format('json').mode('overwrite').save('my_path')
df_with_option.repartitionByRange(df_with_option.count(), 'id').write.format('json').mode('append').save('my_path')
Then I will have the files in my path, each written its own way.

What is the compatible datatype for bigint in Spark and how can we cast bigint into a spark compatible datatype?

I am trying to move data from Greenplum to HDFS using Spark. I can read the data successfully from the source table, and the Spark-inferred schema of the dataframe (of the Greenplum table) is:
DataFrame Schema:
je_header_id: long (nullable = true)
je_line_num: long (nullable = true)
last_updated_by: decimal(15,0) (nullable = true)
last_updated_by_name: string (nullable = true)
ledger_id: long (nullable = true)
code_combination_id: long (nullable = true)
balancing_segment: string (nullable = true)
cost_center_segment: string (nullable = true)
period_name: string (nullable = true)
effective_date: timestamp (nullable = true)
status: string (nullable = true)
creation_date: timestamp (nullable = true)
created_by: decimal(15,0) (nullable = true)
entered_dr: decimal(38,20) (nullable = true)
entered_cr: decimal(38,20) (nullable = true)
entered_amount: decimal(38,20) (nullable = true)
accounted_dr: decimal(38,20) (nullable = true)
accounted_cr: decimal(38,20) (nullable = true)
accounted_amount: decimal(38,20) (nullable = true)
xx_last_update_log_id: integer (nullable = true)
source_system_name: string (nullable = true)
period_year: decimal(15,0) (nullable = true)
period_num: decimal(15,0) (nullable = true)
The corresponding schema of the Hive table is:
je_header_id: bigint
je_line_num: bigint
last_updated_by: bigint
last_updated_by_name: string
ledger_id: bigint
code_combination_id: bigint
balancing_segment: string
cost_center_segment: string
period_name: string
effective_date: timestamp
status: string
creation_date: timestamp
created_by: bigint
entered_dr: double
entered_cr: double
entered_amount: double
accounted_dr: double
accounted_cr: double
accounted_amount: double
xx_last_update_log_id: int
source_system_name: string
period_year: bigint
period_num: bigint
Using the Hive table schema mentioned above, I created the StructType below using this logic:
def convertDatatype(datatype: String): DataType = {
  val convert = datatype match {
    case "string"    => StringType
    case "bigint"    => LongType
    case "int"       => IntegerType
    case "double"    => DoubleType
    case "date"      => TimestampType
    case "boolean"   => BooleanType
    case "timestamp" => TimestampType
  }
  convert
}
Prepared Schema:
je_header_id: long (nullable = true)
je_line_num: long (nullable = true)
last_updated_by: long (nullable = true)
last_updated_by_name: string (nullable = true)
ledger_id: long (nullable = true)
code_combination_id: long (nullable = true)
balancing_segment: string (nullable = true)
cost_center_segment: string (nullable = true)
period_name: string (nullable = true)
effective_date: timestamp (nullable = true)
status: string (nullable = true)
creation_date: timestamp (nullable = true)
created_by: long (nullable = true)
entered_dr: double (nullable = true)
entered_cr: double (nullable = true)
entered_amount: double (nullable = true)
accounted_dr: double (nullable = true)
accounted_cr: double (nullable = true)
accounted_amount: double (nullable = true)
xx_last_update_log_id: integer (nullable = true)
source_system_name: string (nullable = true)
period_year: long (nullable = true)
period_num: long (nullable = true)
When I try to apply my newSchema to the dataframe, I get an exception:
java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of bigint
I understand that it is trying to convert BigDecimal to bigint and fails, but could anyone tell me how to cast the bigint to a Spark-compatible datatype?
If not, how can I modify my logic to give the proper datatypes in the case statement for this bigint/BigDecimal problem?
From your question, it seems like you are trying to convert a bigint value to BigDecimal, which is not right. BigDecimal is a decimal that must have a fixed precision (the maximum number of digits) and scale (the number of digits to the right of the decimal point), and yours seems to be a long value.
Here, instead of using the BigDecimal datatype, try to use LongType to convert the bigint value correctly. See if this solves your purpose.
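For illustration only, here is a PySpark-style sketch of that idea (the same cast approach carries over to Scala): rather than re-applying a new schema to the underlying rows, cast each decimal column to the Hive target type. The dataframe name df and the hive_types mapping below are assumptions taken from the schemas shown above.
from pyspark.sql import functions as F

# Assumed mapping from column name to the Hive target type (subset of the columns above).
hive_types = {
    "last_updated_by": "bigint", "created_by": "bigint",
    "period_year": "bigint", "period_num": "bigint",
    "entered_dr": "double", "entered_cr": "double", "entered_amount": "double",
    "accounted_dr": "double", "accounted_cr": "double", "accounted_amount": "double",
}

# Cast only the columns that need a new type; leave the rest untouched.
casted_df = df.select([
    F.col(c).cast(hive_types[c]) if c in hive_types else F.col(c)
    for c in df.columns
])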

How to sum the values of a column in pyspark dataframe

I am working in PySpark and I have a dataframe with the following columns.
Q1 = spark.read.csv("Q1final.csv",header = True, inferSchema = True)
Q1.printSchema()
root
|-- index_date: integer (nullable = true)
|-- item_id: integer (nullable = true)
|-- item_COICOP_CLASSIFICATION: integer (nullable = true)
|-- item_desc: string (nullable = true)
|-- index_algorithm: integer (nullable = true)
|-- stratum_ind: integer (nullable = true)
|-- item_index: double (nullable = true)
|-- all_gm_index: double (nullable = true)
|-- gm_ra_index: double (nullable = true)
|-- coicop_weight: double (nullable = true)
|-- item_weight: double (nullable = true)
|-- cpih_coicop_weight: double (nullable = true)
I need the sum of all the elements in the last column (cpih_coicop_weight) to use as a Double in other parts of my program. How can I do it?
Thank you very much in advance!
If you want just a double or int as the return value, the following function will work:
from pyspark.sql import functions as F

def sum_col(df, col):
    return df.select(F.sum(col)).collect()[0][0]
Then
sum_col(Q1, 'cpih_coicop_weight')
will return the sum.
I am new to PySpark, so I am not sure why such a simple method of a column object is not in the library.
Try this:
from pyspark.sql import functions as F
total = Q1.groupBy().agg(F.sum("cpih_coicop_weight")).collect()
The variable total will contain your result.
This can also be tried.
total = Q1.agg(F.sum("cpih_coicop_weight")).collect()
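In both snippets, collect() returns a list of Row objects, so to get the sum as a plain Python number you still need to index into the result. A small sketch, using the Q1 dataframe from the question:
from pyspark.sql import functions as F

# first()[0] is equivalent to collect()[0][0] for a single aggregated row.
total_value = Q1.agg(F.sum("cpih_coicop_weight")).first()[0]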

SparkSQL - accessing nested structures Row(field1, field2=Row(..))

I need help with a nested structure in SparkSQL using the sql method. I created a dataframe on top of an existing RDD (dataRDD) with a structure like this:
schema = StructType([
    StructField("m", LongType()),
    StructField("field2", StructType([
        StructField("st", StringType()),
        StructField("end", StringType()),
        StructField("dr", IntegerType())
    ]))
])
printSchema() returns this:
root
|-- m: long (nullable = true)
|-- field2: struct (nullable = true)
| |-- st: string (nullable = true)
| |-- end: string (nullable = true)
| |-- dr: integer (nullable = true)
Creating the data frame from the data RDD and applying the schema works well.
df= sqlContext.createDataFrame( dataRDD, schema )
df.registerTempTable( "logs" )
But retrieving the data is not working:
res = sqlContext.sql("SELECT m, field2.st FROM logs") # <- This fails
...org.apache.spark.sql.AnalysisException: cannot resolve 'field.st' given input columns msisdn, field2;
res = sqlContext.sql("SELECT m, field2[0] FROM logs") # <- Also fails
...org.apache.spark.sql.AnalysisException: unresolved operator 'Project [field2#1[0] AS c0#2];
res = sqlContext.sql("SELECT m, st FROM logs") # <- Also not working
...cannot resolve 'st' given input columns m, field2;
So how can I access the nested structure in the SQL syntax?
Thanks
You had something else happening in your testing, because field2.st is the correct syntax:
case class field2(st: String, end: String, dr: Int)

val schema = StructType(
  Array(
    StructField("m", LongType),
    StructField("field2", StructType(Array(
      StructField("st", StringType),
      StructField("end", StringType),
      StructField("dr", IntegerType)
    )))
  )
)

val df2 = sqlContext.createDataFrame(
  sc.parallelize(Array(Row(1L, field2("this", "is", 1234)), Row(2L, field2("a", "test", 5678)))),
  schema
)
/* df2.printSchema
root
|-- m: long (nullable = true)
|-- field2: struct (nullable = true)
| |-- st: string (nullable = true)
| |-- end: string (nullable = true)
| |-- dr: integer (nullable = true)
*/
df2.registerTempTable("df2")
val results = sqlContext.sql("select m, field2.st from df2")
/* results.show
m st
1 this
2 a
*/
Look back at your error message: cannot resolve 'field.st' given input columns msisdn, field2 -- field vs. field2. Check your code again -- the names are not lining up.
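For completeness, here is a quick sketch of the same fix in PySpark (the language of the question), assuming the df dataframe and the logs temp table registered in the question:
# SQL path, once the field name matches the schema (field2, not field):
res = sqlContext.sql("SELECT m, field2.st FROM logs")

# Or with the DataFrame API, without SQL:
res = df.select("m", "field2.st")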
