java.lang.NumberFormatException: For input string: "" while creating Dataframe - apache-spark

I am using the Cloudera VM. I have imported the products table from retail_db as a text file with '|' as the field separator (using Sqoop).
Following is the table schema:
mysql> describe products;
product_id: int(11)
product_category_id: int(11)
product_name: varchar(45)
product_description: varchar(255)
product_price: float
product_image: varchar(255)
I want to create a DataFrame from this data.
The following code works without any issue:
var products = sc.textFile("/user/cloudera/ex/products").map(r => {var p = r.split('|'); (p(0).toInt, p(1).toInt, p(2), p(3), p(4).toFloat, p(5))})
case class Products(productID: Int, productCategory: Int, productName: String, productDescription: String, productPrice: Float, productImage: String)
var productsDF = products.map(r => Products(r._1, r._2, r._3, r._4, r._5, r._6)).toDF()
productsDF.show()
But I get a NumberFormatException with the following code:
case class Products (product_id: Int, product_category_id: Int, product_name: String, product_description: String, product_price: Float, product_image: String)
val productsDF = sc.textFile("/user/cloudera/ex/products").map(_.split("|")).map(p => Products(p(0).trim.toInt, p(1).trim.toInt, p(2), p(3), p(4).trim.toFloat, p(5))).toDF()
productsDF.show()
java.lang.NumberFormatException: For input string: ""
Why am I getting an exception in the second snippet even though it is essentially the same as the first one?

The error is due to _.split("|") in the second part of your code.
You need to use _.split('|'), _.split("\\|"), _.split("""\|"""), or Pattern.quote("|").
If you pass "|" as a String, split treats it as a regular expression, and | means "or" in a regex. An empty alternation matches the empty string at every position, so the delimiter itself is never matched and the split produces empty strings such as "", which then fail to parse as a number.
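A quick illustration of the difference (a minimal sketch for the Scala REPL; the sample row below is made up):

val row = "1009|45|Bike|A kids bike|299.98|http://example.com/p.jpg"  // made-up sample row

// Char overload: splits on the literal '|' character
row.split('|')      // Array(1009, 45, Bike, A kids bike, 299.98, http://example.com/p.jpg)

// String overload: the argument is a regex; "|" matches the empty string everywhere,
// so the result is single characters, and on older JDKs a leading "" as well,
// which is where the NumberFormatException for "" comes from
row.split("|")

// Escaping the pipe makes the regex match the literal '|' again
row.split("\\|")    // same result as row.split('|')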
Hope this helps!

Related

Apache Spark - Performance with and without using Case Classes

I have two datasets, customers and orders, and I want to join them on the customer key.
I tried two approaches, one using case classes and one without.
Using case classes: just takes forever to complete, almost 11 minutes
case class Customer(custKey: Int, name: String, address: String, phone: String, acctBal: String, mktSegment: String, comment: String) extends Serializable
case class Order(orderKey: Int, custKey: Int, orderStatus: String, totalPrice: Double, orderDate: String, orderQty: String, clerk: String, shipPriority: String, comment: String) extends Serializable
val customers = sc.textFile("customersFile").map(row => row.split('|')).map(cust => (cust(0).toInt, Customer(cust(0).toInt, cust(1), cust(2), cust(3), cust(4), cust(5), cust(6))))
val orders = sc.textFile("ordersFile").map(row => row.split('|')).map(order => (order(1).toInt, Order(order(0).toInt, order(1).toInt, order(2), order(3).toDouble, order(4), order(5), order(6), order(7), order(8))))
orders.join(customers).take(1)
Without case classes: completes in a few seconds
val customers = sc.textFile("customersFile").map(row => row.split('|'))
val orders = sc.textFile("ordersFile").map(row => row.split('|'))
val customersByCustKey = customers.map(row => (row(0), row)) // customer key is the first column in customers rdd, hence row(0)
val ordersByCustKey = orders.map(row => (row(1), row)) // customer key is the second column in orders rdd, hence row(1)
ordersByCustKey.join(customersByCustKey).take(1)
I want to know if this is due to the time taken for serialization/deserialization when using case classes.
If yes, in which cases is it recommended to use case classes?
Job details with and without case classes: (Spark UI screenshots not reproduced here)
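If serializer cost is the suspicion, one quick experiment (a sketch only, assuming the default Java serializer is currently in use) is to register the case classes with Kryo and rerun the comparison:

import org.apache.spark.{SparkConf, SparkContext}

// Register the question's case classes with Kryo; Customer and Order are the ones defined above
val conf = new SparkConf()
  .setAppName("customer-order-join")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Customer], classOf[Order]))
val sc = new SparkContext(conf)

The second version also never builds per-row objects (it ships plain Array[String] values), so part of the gap may simply be the extra allocation and the cost of serializing richer objects during the join's shuffle.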

How to improve a DataFrame UDF that connects to HBase for every row

I have a DataFrame where I need to create a column based on values from each row.
I iterate using a UDF, which runs for every row and connects to HBase to get data.
The UDF creates a connection, returns the data, and closes the connection.
The process is slow because ZooKeeper hangs after a few reads. I want to pull the data with only one open connection.
I tried mapPartitions, but the connection cannot be passed into it because it is not serializable.
UDF:
val lookUpUDF = udf((partyID: Int, brand: String, algorithm: String, bigPartyProductMappingTableName: String, env: String) => lookUpLogic.lkpBigPartyAccount(partyID, brand, algorithm, bigPartyProductMappingTableName, env))
How the DataFrame is iterated:
ocisPreferencesDF
  .withColumn("deleteStatus", lookUpUDF(
    col(StagingBatchConstants.OcisPreferencesPartyId),
    col(StagingBatchConstants.OcisPreferencesBrand),
    lit(EnvironmentConstants.digest_algorithm),
    lit(bigPartyProductMappingTableName),
    lit(env)))
Main logic:
def lkpBigPartyAccount(partyID: Int,
                       brand: String,
                       algorithm: String,
                       bigPartyProductMappingTableName: String,
                       envVar: String,
                       hbaseInteraction: HbaseInteraction = new HbaseInteraction,
                       digestGenerator: DigestGenerator = new DigestGenerator): Array[(String, String)] = {
  AppInit.setEnvVar(envVar)
  val message = partyID.toString + "-" + brand
  val rowKey = Base64.getEncoder.encodeToString(message.getBytes())
  val hbaseAccountInfo = hbaseInteraction.hbaseReader(bigPartyProductMappingTableName, rowKey, "cf").asScala
  val convertMap: mutable.HashMap[String, String] = new mutable.HashMap[String, String]
  for ((key, value) <- hbaseAccountInfo) {
    convertMap.put(key.toString, value.toString)
  }
  convertMap.toArray
}
I expect to improve the code's performance. What I'm hoping for is to create the connection only once.
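One common pattern for this (a sketch only, reusing the HbaseInteraction helper and the DataFrame from the question; the column names passed to getAs are hypothetical) is to drop down to mapPartitions and create the connection inside the partition function, so it is built once per partition on the executor and never has to be serialized:

import spark.implicits._   // spark is the active SparkSession (assumption)

val resultDF = ocisPreferencesDF
  .mapPartitions { rows =>
    // created on the executor, once per partition, so nothing is serialized from the driver
    val hbase = new HbaseInteraction
    rows.map { row =>
      val partyId = row.getAs[Int]("partyId")     // hypothetical column name
      val brand   = row.getAs[String]("brand")    // hypothetical column name
      // look up HBase through the shared connection and emit whatever the output row needs
      (partyId, brand)
    }
    // connection cleanup is omitted here; it can be handled once the iterator is exhausted
  }
  .toDF("partyId", "brand")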

Spark CBO not showing row count for queries that filter on a partition column

I'm working on Spark 2.3.0, using the Cost Based Optimizer (CBO) to compute statistics for queries on external tables.
I have created an external table in Spark:
CREATE EXTERNAL TABLE IF NOT EXISTS test (
eventID string,type string,exchange string,eventTimestamp bigint,sequenceNumber bigint
,optionID string,orderID string,side string,routingFirm string,routedOrderID string
,session string,price decimal(18,8),quantity bigint,timeInForce string,handlingInstructions string
,orderAttributes string,isGloballyUnique boolean,originalOrderID string,initiator string,leavesQty bigint
,symbol string,routedOriginalOrderID string,displayQty bigint,orderType string,coverage string
,result string,resultTimestamp bigint,nbbPrice decimal(18,8),nbbQty bigint,nboPrice decimal(18,8)
,nboQty bigint,reporter string,quoteID string,noteType string,definedNoteData string,undefinedNoteData string
,note string,desiredLeavesQty bigint,displayPrice decimal(18,8),workingPrice decimal(18,8),complexOrderID string
,complexOptionID string,cancelQty bigint,cancelReason string,openCloseIndicator string,exchOriginCode string
,executingFirm string,executingBroker string,cmtaFirm string,mktMkrSubAccount string,originalOrderDate string
,tradeID string,saleCondition string,executionCodes string,buyDetails_side string,buyDetails_leavesQty bigint
,buyDetails_openCloseIndicator string,buyDetails_quoteID string,buyDetails_orderID string,buyDetails_executingFirm string,buyDetails_executingBroker string,buyDetails_cmtaFirm string,buyDetails_mktMkrSubAccount string,buyDetails_exchOriginCode string,buyDetails_liquidityCode string,buyDetails_executionCodes string,sellDetails_side string,sellDetails_leavesQty bigint,sellDetails_openCloseIndicator string,sellDetails_quoteID string,sellDetails_orderID string,sellDetails_executingFirm string,sellDetails_executingBroker string,sellDetails_cmtaFirm string,sellDetails_mktMkrSubAccount string,sellDetails_exchOriginCode string,sellDetails_liquidityCode string,sellDetails_executionCodes string,tradeDate int,reason string,executionTimestamp bigint,capacity string,fillID string,clearingNumber string
,contraClearingNumber string,buyDetails_capacity string,buyDetails_clearingNumber string,sellDetails_capacity string
,sellDetails_clearingNumber string,receivingFirm string,marketMaker string,sentTimestamp bigint,onlyOneQuote boolean
,originalQuoteID string,bidPrice decimal(18,8),bidQty bigint,askPrice decimal(18,8),askQty bigint,declaredTimestamp bigint,revokedTimestamp bigint,awayExchange string,comments string,clearingFirm string )
PARTITIONED BY (date integer ,reporteIDs string ,version integer )
STORED AS PARQUET LOCATION '/home/test/'
I have computed statistics on the columns using the following command:
val df = spark.read.parquet("/home/test/")
val cols = df.columns.mkString(",")
val analyzeDDL = s"Analyze table events compute statistics for columns $cols"
spark.sql(analyzeDDL)
Now, when I try to get the statistics for the query:
val query = "Select * from test where date > 20180222"
It gives me only the size and not the rowCount:
scala> val exec = spark.sql(query).queryExecution
exec: org.apache.spark.sql.execution.QueryExecution =
== Parsed Logical Plan ==
'Project [*]
+- 'Filter ('date > 20180222)
+- 'UnresolvedRelation `test`
== Analyzed Logical Plan ==
eventID: string, type: string, exchange: string, eventTimestamp: bigint, sequenceNumber: bigint, optionID: string, orderID: string, side: string, routingFirm: string, routedOrderID: string, session: string, price: decimal(18,8), quantity: bigint, timeInForce: string, handlingInstructions: string, orderAttributes: string, isGloballyUnique: boolean, originalOrderID: string, initiator: string, leavesQty: bigint, symbol: string, routedOriginalOrderID: string, displayQty: bigint, orderType: string, ... 82 more fields
Project [eventID#797974, type#797975, exchange#797976, eventTimestamp#797977L, sequenceNumber#...
scala>
scala> val stats = exec.optimizedPlan.stats
stats: org.apache.spark.sql.catalyst.plans.logical.Statistics = Statistics(sizeInBytes=1.0 B, hints=none)
Am I missing any steps here? How can I get the row count for the query?
Spark version: 2.3.0
The files in the table are in Parquet format.
Update
I'm able to get the statistics for a CSV file, but not for a Parquet file.
The difference between the execution plans is that for CSV we get a HiveTableRelation, while for Parquet it is a Relation.
Any idea why that is?
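One thing worth double-checking (a sketch of a diagnostic, not a confirmed fix): the ANALYZE statement above targets a table called events while the query reads from test, and CBO also has to be switched on explicitly. Verifying what the metastore actually holds for the queried table looks roughly like this:

// CBO must be enabled for stats-based estimates (it is off by default in 2.3)
spark.conf.set("spark.sql.cbo.enabled", "true")

// Table-level statistics (row count, total size) for the table the query actually reads
spark.sql("ANALYZE TABLE test COMPUTE STATISTICS")

// Inspect what was recorded; the Statistics line should now include a row count
spark.sql("DESCRIBE EXTENDED test").show(100, truncate = false)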

Convert a Dataset with a single column to a multi-column Dataset in Scala

I have a dataset which is a Dataset of String, and it contains the following data:
12348,5,233,234559,4
12348,5,233,234559,4
12349,6,233,234560,5
12350,7,233,234561,6
I want to split each single-column row into multiple columns named RegionId, PerilId, Date, EventId, ModelId. How do I achieve this?
You mean something like this:
case class NewSet(RegionId: String, PerilId: String, Date: String, EventId: String, ModelId: String)
import spark.implicits._   // needed for the Encoder[NewSet]
val newDataset = oldDataset.map { s: String =>
  val strings = s.split(",")
  NewSet(strings(0), strings(1), strings(2), strings(3), strings(4))
}
Of course you should probably make the lambda function a little more robust...
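A slightly more defensive version (just a sketch; the error-handling policy is up to you) could drop malformed rows instead of throwing on them:

val newDataset = oldDataset.flatMap { s =>
  s.split(",") match {
    // exactly five fields: build the case class
    case Array(regionId, perilId, date, eventId, modelId) =>
      Seq(NewSet(regionId, perilId, date, eventId, modelId))
    // anything else: skip the malformed row instead of failing the whole job
    case _ => Seq.empty[NewSet]
  }
}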
If you have the data you specified in an RDD, then converting it to a DataFrame is pretty easy:
case class MyClass(RegionId: String, PerilId: String, Date: String,
                   EventId: String, ModelId: String)

// map each comma-separated line to the case class first, then convert
val rowsRdd = rdd.map(_.split(",")).map(a => MyClass(a(0), a(1), a(2), a(3), a(4)))
val dataframe = sqlContext.createDataFrame(rowsRdd)
This DataFrame will have all the columns, with the column names corresponding to the fields of the case class MyClass.

Cassandra: insert system.schema_columns into another table with modifications

I have the following select:
select * from system.schema_columns where keyspace_name = 'automotive' and columnfamily_name = 'cars';
and I want to insert the data returned by it into another table with some modifications:
- I want to insert the data type of each column
- I want to remove audit columns like created_at, created_by, etc.
In MySQL we can do this with:
insert into formUtil(table_name, column_name, ordinal_position, is_nullable, data_type)
SELECT
col.table_name,
col.column_name,
col.ordinal_position,
case when col.is_nullable = 'YES' then 1 else 0 end,
col.data_type
from
information_schema.COLUMNS col
where
col.table_schema = 'i2cwac' and
col.column_name not in ('id','modifiedAt','modifiedBy','createdAt','createdBy') and
col.table_name = 'users';
How can we do this in Cassandra?
You can achieve this by using Spark:
import java.nio.ByteBuffer
import com.datastax.spark.connector._

// `type` is a Scala keyword, so the field name needs backticks
case class SchemaColumns(keyspaceName: String, tableName: String, clusteringOrder: String, columnNameBytes: ByteBuffer, kind: String, position: Int, `type`: String)
case class AnotherTable(keyspaceName: String, tableName: String, `type`: String)

sc.cassandraTable[SchemaColumns]("system", "schema_columns")
  .map(schemaColumns => AnotherTable(schemaColumns.keyspaceName, schemaColumns.tableName, schemaColumns.`type`))
  .saveToCassandra("my_keyspace", "another_table")
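To mirror the WHERE clause of the original select (and, in the same way, to exclude the audit columns), a filter can be applied before the save; this is just a sketch, and the keyspace and table literals are the ones from the question:

// restrict to the keyspace/table from the original select;
// an audit-column exclusion would be another predicate in the same filter
val carColumns = sc.cassandraTable[SchemaColumns]("system", "schema_columns")
  .filter(c => c.keyspaceName == "automotive" && c.tableName == "cars")

carColumns
  .map(c => AnotherTable(c.keyspaceName, c.tableName, c.`type`))
  .saveToCassandra("my_keyspace", "another_table")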
