Convert a Dataset with a single column to a multi-column Dataset in Scala - apache-spark

I have a Dataset[String] which holds the following data:
12348,5,233,234559,4
12348,5,233,234559,4
12349,6,233,234560,5
12350,7,233,234561,6
I want to split each of these single-column rows into multiple columns named RegionId, PerilId, Date, EventId, ModelId. How do I achieve this?

You mean:
case class NewSet(RegionId: String, PerilId: String, Date: String, EventId: String, ModelId: String)
val newDataset = oldDataset.map { s: String =>
  val strings = s.split(",")
  NewSet(strings(0), strings(1), strings(2), strings(3), strings(4))
}
Of course you should probably make the lambda function a little more robust...
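A minimal sketch of what "more robust" could look like, assuming malformed rows should simply be dropped rather than fail the job:

val newDataset = oldDataset.flatMap { s =>
  s.split(",", -1) match {
    case Array(regionId, perilId, date, eventId, modelId) =>
      Seq(NewSet(regionId, perilId, date, eventId, modelId)) // exactly five fields
    case _ =>
      Seq.empty[NewSet] // drop malformed rows instead of throwing
  }
}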

If you have the data you specified in an RDD, then converting it to a DataFrame is pretty easy.
case class MyClass(RegionId: String, PerilId: String, Date: String,
  EventId: String, ModelId: String)

// rdd must be an RDD[MyClass]; Spark derives the schema from the case class fields
val dataframe = sqlContext.createDataFrame(rdd)
This DataFrame will have all the columns, with column names corresponding to the fields of the case class MyClass.
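If the RDD still holds the raw comma-separated strings, a sketch of the parsing step plus the implicit toDF route (assuming a SQLContext named sqlContext is in scope):

import sqlContext.implicits._

val dataframe = rdd
  .map(_.split(","))
  .map(f => MyClass(f(0), f(1), f(2), f(3), f(4)))
  .toDF()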

Related

Apache Spark - Performance with and without using Case Classes

I have 2 datasets, customers and orders
I want to join both on customer key.
I tried two approaches, one using case classes and one without.
Using case classes: takes forever to complete - almost 11 minutes
case class Customer(custKey: Int, name: String, address: String, phone: String, acctBal: String, mktSegment: String, comment: String) extends Serializable
case class Order(orderKey: Int, custKey: Int, orderStatus: String, totalPrice: Double, orderDate: String, orderQty: String, clerk: String, shipPriority: String, comment: String) extends Serializable
val customers = sc.textFile("customersFile").map(row => row.split('|')).map(cust => (cust(0).toInt, Customer(cust(0).toInt, cust(1), cust(2), cust(3), cust(4), cust(5), cust(6))))
val orders = sc.textFile("ordersFile").map(row => row.split('|')).map(order => (order(1).toInt, Order(order(0).toInt, order(1).toInt, order(2), order(3).toDouble, order(4), order(5), order(6), order(7), order(8))))
orders.join(customers).take(1)
Without case classes: completes in a few seconds
val customers = sc.textFile("customersFile").map(row => row.split('|'))
val orders = sc.textFile("ordersFile").map(row => row.split('|'))
val customersByCustKey = customers.map(row => (row(0), row)) // customer key is the first column in customers rdd, hence row(0)
val ordersByCustKey = orders.map(row => (row(1), row)) // customer key is the second column in orders rdd, hence row(1)
ordersByCustKey.join(customersByCustKey).take(1)
I want to know if this is due to the time taken for serialization/deserialization when using case classes.
If yes, in which cases is it recommended to use case classes?
[Screenshot: job details using case classes]
[Screenshot: job details without case classes]
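One possibility worth checking (an assumption, not something the timings above confirm) is the cost of serializing the Customer and Order objects during the shuffle; registering them with Kryo is the usual first step to reduce that cost:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("join-benchmark")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Customer], classOf[Order]))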

How to improve DataFrame UDF Which is Connecting to Hbase For every Row

I have a DataFrame where I need to create a column based on values from each row.
I do this with a UDF which runs for every row and connects to HBase to fetch the data.
The UDF creates a connection, returns the data, and closes the connection.
The process is slow because ZooKeeper hangs after a few reads. I want to pull the data with only one open connection.
I tried mapPartitions, but the connection is not passed along because it is not serializable.
UDF:
val lookUpUDF = udf((partyID: Int, brand: String, algorithm: String, bigPartyProductMappingTableName: String, env: String) => lookUpLogic.lkpBigPartyAccount(partyID, brand, algorithm, bigPartyProductMappingTableName, env))
How the DataFrame is iterated:
ocisPreferencesDF
  .withColumn("deleteStatus", lookUpUDF(
    col(StagingBatchConstants.OcisPreferencesPartyId),
    col(StagingBatchConstants.OcisPreferencesBrand),
    lit(EnvironmentConstants.digest_algorithm),
    lit(bigPartyProductMappingTableName),
    lit(env)))
Main logic:
def lkpBigPartyAccount(partyID: Int,
                       brand: String,
                       algorithm: String,
                       bigPartyProductMappingTableName: String,
                       envVar: String,
                       hbaseInteraction: HbaseInteraction = new HbaseInteraction,
                       digestGenerator: DigestGenerator = new DigestGenerator): Array[(String, String)] = {
  AppInit.setEnvVar(envVar)
  val message = partyID.toString + "-" + brand
  val rowKey = Base64.getEncoder.encodeToString(message.getBytes())
  val hbaseAccountInfo = hbaseInteraction.hbaseReader(bigPartyProductMappingTableName, rowKey, "cf").asScala
  val convertMap: mutable.HashMap[String, String] = new mutable.HashMap[String, String]
  for ((key, value) <- hbaseAccountInfo) {
    convertMap.put(key.toString, value.toString)
  }
  convertMap.toArray
}
I expect to improve the performance of this code. What I'm hoping for is to create the connection only once.
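A minimal sketch of the mapPartitions approach, assuming HbaseInteraction can be constructed on the executors (nothing needs to be serialized because the connection is created inside the partition function; the names are taken from the code above):

import scala.collection.JavaConverters._

val withLookup = ocisPreferencesDF.rdd.mapPartitions { rows =>
  val hbase = new HbaseInteraction() // one connection per partition, never serialized
  rows.map { row =>
    val partyId = row.getAs[Int](StagingBatchConstants.OcisPreferencesPartyId)
    val brand = row.getAs[String](StagingBatchConstants.OcisPreferencesBrand)
    val rowKey = java.util.Base64.getEncoder.encodeToString(s"$partyId-$brand".getBytes())
    val cells = hbase.hbaseReader(bigPartyProductMappingTableName, rowKey, "cf").asScala
    (partyId, brand, cells.map { case (k, v) => k.toString -> v.toString }.toArray)
  }
}

Closing the connection cleanly is the awkward part, since the iterator is consumed lazily; materialising the partition (for example with toList.iterator) before closing, or using a small connection pool, are the usual workarounds.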

Spark CBO not showing rowcount for queries having partition column in query

I'm working on Spark 2.3.0, using the Cost Based Optimizer (CBO) to compute statistics for queries on external tables.
I have created an external table in Spark:
CREATE EXTERNAL TABLE IF NOT EXISTS test (
eventID string,type string,exchange string,eventTimestamp bigint,sequenceNumber bigint
,optionID string,orderID string,side string,routingFirm string,routedOrderID string
,session string,price decimal(18,8),quantity bigint,timeInForce string,handlingInstructions string
,orderAttributes string,isGloballyUnique boolean,originalOrderID string,initiator string,leavesQty bigint
,symbol string,routedOriginalOrderID string,displayQty bigint,orderType string,coverage string
,result string,resultTimestamp bigint,nbbPrice decimal(18,8),nbbQty bigint,nboPrice decimal(18,8)
,nboQty bigint,reporter string,quoteID string,noteType string,definedNoteData string,undefinedNoteData string
,note string,desiredLeavesQty bigint,displayPrice decimal(18,8),workingPrice decimal(18,8),complexOrderID string
,complexOptionID string,cancelQty bigint,cancelReason string,openCloseIndicator string,exchOriginCode string
,executingFirm string,executingBroker string,cmtaFirm string,mktMkrSubAccount string,originalOrderDate string
,tradeID string,saleCondition string,executionCodes string,buyDetails_side string,buyDetails_leavesQty bigint
,buyDetails_openCloseIndicator string,buyDetails_quoteID string,buyDetails_orderID string,buyDetails_executingFirm string,buyDetails_executingBroker string
,buyDetails_cmtaFirm string,buyDetails_mktMkrSubAccount string,buyDetails_exchOriginCode string,buyDetails_liquidityCode string,buyDetails_executionCodes string
,sellDetails_side string,sellDetails_leavesQty bigint,sellDetails_openCloseIndicator string,sellDetails_quoteID string,sellDetails_orderID string
,sellDetails_executingFirm string,sellDetails_executingBroker string,sellDetails_cmtaFirm string,sellDetails_mktMkrSubAccount string,sellDetails_exchOriginCode string
,sellDetails_liquidityCode string,sellDetails_executionCodes string,tradeDate int,reason string,executionTimestamp bigint
,capacity string,fillID string,clearingNumber string
,contraClearingNumber string,buyDetails_capacity string,buyDetails_clearingNumber string,sellDetails_capacity string
,sellDetails_clearingNumber string,receivingFirm string,marketMaker string,sentTimestamp bigint,onlyOneQuote boolean
,originalQuoteID string,bidPrice decimal(18,8),bidQty bigint,askPrice decimal(18,8),askQty bigint,declaredTimestamp bigint,revokedTimestamp bigint,awayExchange string,comments string,clearingFirm string )
PARTITIONED BY (date integer, reporteIDs string, version integer)
STORED AS PARQUET LOCATION '/home/test/'
I have computed statistics on the columns using the following command:
val df = spark.read.parquet("/home/test/")
val cols = df.columns.mkString(",")
val analyzeDDL = s"Analyze table events compute statistics for columns $cols"
spark.sql(analyzeDDL)
Now when I try to get the statistics for the query:
val query = "Select * from test where date > 20180222"
It's giving me only the size and not the rowCount:
scala> val exec = spark.sql(query).queryExecution
exec: org.apache.spark.sql.execution.QueryExecution =
== Parsed Logical Plan ==
'Project [*]
+- 'Filter ('date > 20180222)
+- 'UnresolvedRelation `test`
== Analyzed Logical Plan ==
eventID: string, type: string, exchange: string, eventTimestamp: bigint, sequenceNumber: bigint, optionID: string, orderID: string, side: string, routingFirm: string, routedOrderID: string, session: string, price: decimal(18,8), quantity: bigint, timeInForce: string, handlingInstructions: string, orderAttributes: string, isGloballyUnique: boolean, originalOrderID: string, initiator: string, leavesQty: bigint, symbol: string, routedOriginalOrderID: string, displayQty: bigint, orderType: string, ... 82 more fields
Project [eventID#797974, type#797975, exchange#797976, eventTimestamp#797977L, sequenceNumber#...
scala>
scala> val stats = exec.optimizedPlan.stats
stats: org.apache.spark.sql.catalyst.plans.logical.Statistics = Statistics(sizeInBytes=1.0 B, hints=none)
Am I missing any steps here? How can I get the row count for the query?
Spark-version : 2.3.0
Files in the table are in parquet format.
Update
I'm able to get the statistics for a CSV file, but not for a Parquet file.
The difference between the execution plans for Parquet and CSV is that for CSV we get a HiveTableRelation, while for Parquet it's a Relation.
Any idea why that is?
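A sketch of the steps that usually have to be in place before rowCount shows up, under two assumptions: the query goes through the metastore table (here test) rather than a direct path read, and CBO is switched on:

spark.conf.set("spark.sql.cbo.enabled", "true")

// Table-level statistics populate rowCount; column-level statistics refine selectivity
spark.sql("ANALYZE TABLE test COMPUTE STATISTICS")
spark.sql("ANALYZE TABLE test COMPUTE STATISTICS FOR COLUMNS eventID, price, quantity")

val exec = spark.sql("SELECT * FROM test WHERE date > 20180222").queryExecution
println(exec.optimizedPlan.stats)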

Filter a list of case class objects based on a list of strings

I have a case class User(id: String, name: String, address: String, password: String) and another case class Account(userId: String, accountId: String, roles: Set[String]). I need to filter a list of Account objects (List[Account]) based on a list of userIds which I have as a List[String] in Scala. I have been struggling with this and haven't managed to get it working. Any pointers on how I should do this would be really helpful.
Thanks !
I'm not sure if I understand your question correctly, but if you're trying to keep only the Accounts whose userId appears in a separate collection, you can do it like this:
val accounts: List[Account] = ???
val idsToKeep: Set[String] = ???
accounts.filter(a => idsToKeep.contains(a.userId))
For the record, if you use the contains method a lot, you are better off using a Set[String] than a List[String] to store the ids to keep.
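A small end-to-end sketch with made-up values, converting the List[String] to a Set once up front:

case class Account(userId: String, accountId: String, roles: Set[String])

val accounts = List(
  Account("u1", "a1", Set("admin")),
  Account("u2", "a2", Set("viewer")),
  Account("u3", "a3", Set("viewer"))
)
val userIds: List[String] = List("u1", "u3")

val idsToKeep = userIds.toSet // convert once, not inside the filter
val kept = accounts.filter(a => idsToKeep.contains(a.userId))
// kept == List(Account("u1", "a1", Set("admin")), Account("u3", "a3", Set("viewer")))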

Merging two CSV Files in Azure Data Factory by using custom .NET activity

I have two CSV files, each with many columns. I have to merge these two CSV files into a single CSV file using a column that is common to both input files.
I browsed through all the blogs and sites, and all of them point to using a custom .NET activity, so I went through this site.
But I'm still not able to figure out the C# coding part. Can anyone share code for how to merge these two CSV files using a custom .NET activity in Azure Data Factory?
Here is an example of how to join those two tab-separated files on Zip_Code column using U-SQL. This example assumes both files are held in Azure Data Lake Storage (ADLS). This script could easily be incorporated into a Data Factory pipeline:
// Get raw input from file A
#inputA =
EXTRACT
Date_received string,
Product string,
Sub_product string,
Issue string,
Sub_issue string,
Consumer_complaint_narrative string,
Company_public_response string,
Company string,
State string,
ZIP_Code string,
Tags string,
Consumer_consent_provided string,
Submitted_via string,
Date_sent_to_company string,
Company_response_to_consumer string,
Timely_response string,
Consumer_disputed string,
Complaint_ID string
FROM "/input/input48A.txt"
USING Extractors.Tsv();
// Get raw input from file B
#inputB =
EXTRACT Provider_ID string,
Hospital_Name string,
Address string,
City string,
State string,
ZIP_Code string,
County_Name string,
Phone_Number string,
Hospital_Type string,
Hospital_Ownership string,
Emergency_Services string,
Meets_criteria_for_meaningful_use_of_EHRs string,
Hospital_overall_rating string,
Hospital_overall_rating_footnote string,
Mortality_national_comparison string,
Mortality_national_comparison_footnote string,
Safety_of_care_national_comparison string,
Safety_of_care_national_comparison_footnote string,
Readmission_national_comparison string,
Readmission_national_comparison_footnote string,
Patient_experience_national_comparison string,
Patient_experience_national_comparison_footnote string,
Effectiveness_of_care_national_comparison string,
Effectiveness_of_care_national_comparison_footnote string,
Timeliness_of_care_national_comparison string,
Timeliness_of_care_national_comparison_footnote string,
Efficient_use_of_medical_imaging_national_comparison string,
Efficient_use_of_medical_imaging_national_comparison_footnote string,
Location string
FROM "/input/input48B.txt"
USING Extractors.Tsv();
// Join the two files on the Zip_Code column
#output =
SELECT b.Provider_ID,
b.Hospital_Name,
b.Address,
b.City,
b.State,
b.ZIP_Code,
a.Complaint_ID
FROM #inputA AS a
INNER JOIN
#inputB AS b
ON a.ZIP_Code == b.ZIP_Code
WHERE a.ZIP_Code == "36033";
// Output the file
OUTPUT #output
TO "/output/output.txt"
USING Outputters.Tsv(quoting : false);
This could also be converted into a U-SQL stored procedure with parameters for the filenames and the zip code.
There are of course many ways to achieve this, each with their own pros and cons. The custom .NET activity, for example, might feel more comfortable for someone with a .NET background, but you'll need some compute to run it on. Importing the files into an Azure SQL Database would be a good option for someone with a SQL/database background and an Azure SQL DB in the subscription.
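For comparison only, and only if a Spark cluster is among the compute options wired into the pipeline (for example via an HDInsight activity): the same join is short in Spark. A sketch assuming the files are tab-separated with a header row, reusing the paths and columns from the U-SQL example above:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("merge-files").getOrCreate()

// Read both tab-separated inputs, inferring column names from the header row
val a = spark.read.option("header", "true").option("sep", "\t").csv("/input/input48A.txt")
val b = spark.read.option("header", "true").option("sep", "\t").csv("/input/input48B.txt")

val joined = b
  .select("Provider_ID", "Hospital_Name", "Address", "City", "State", "ZIP_Code")
  .join(a.select("ZIP_Code", "Complaint_ID"), Seq("ZIP_Code"), "inner")
  .filter(col("ZIP_Code") === "36033")

joined.write.option("header", "true").option("sep", "\t").csv("/output/output")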
