Spark: load csv file with a different schema - apache-spark

I have a CSV file like this:
product price,product origin,phone number
20,US,200200
I would like to load the CSV file using a new schema so that my dataset looks like this:
|price | origin | number |
|20 | US | 200200 |
I tried to create a schema using StructField:
sparkSession.read().format("csv")
.option("header", "false")
.option("delimiter", ",")
.schema(myScheme).load(csv)
but what I got was this:
|price | origin | number |
|200200 | US | 20 |
What is the correct way to load the CSV with a new schema and the correct column order?

Using a CSV file with the exact contents that you posted in your question:
product price,product origin,phone number
20,US,200200
You should be able to create a schema by using types from org.apache.spark.sql.types._. You could do something like this:
import org.apache.spark.sql.types._
val mySchema = new StructType()
.add("product price", IntegerType)
.add("product origin", StringType)
.add("phone number", StringType)
val df = spark
.read
.option("header", "true")
.schema(mySchema)
.csv("./simpleCSV.csv")
df.show
+-------------+--------------+------------+
|product price|product origin|phone number|
+-------------+--------------+------------+
| 20| US| 200200|
+-------------+--------------+------------+
df.printSchema
root
|-- product price: integer (nullable = true)
|-- product origin: string (nullable = true)
|-- phone number: string (nullable = true)
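If you also want the shorter column names from your question (price, origin, number), one option is to keep the schema fields in file order as above and rename the columns afterwards, e.g. with toDF (a minimal sketch; the new names are just the ones you listed):
// rename the three columns positionally
val renamed = df.toDF("price", "origin", "number")
renamed.show
+-----+------+------+
|price|origin|number|
+-----+------+------+
|   20|    US|200200|
+-----+------+------+
The key point is that a user-supplied CSV schema is applied by position, not by header name, so the StructType fields must follow the column order in the file; renaming is a separate step.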
Hope this helps!

Related

Why did Spark interchange values of two columns?

Can someone please explain why Spark interchanges the values of two columns when querying a DataFrame?
The values of ProposedAction are returned for SimpleMatchRate and vice versa.
Here is the code sample:
import glob
import os
os.environ["PYARROW_IGNORE_TIMEZONE"] = "1"
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType as ST, StructField as SF, StringType as STR
spark = (SparkSession.builder
.master("local")
.appName("Fuzzy")
.config("spark.jars", "../jars/mysql-connector-java-8.0.29.jar")
.config("spark.driver.extraClassPath", "../jars/mysql-connector-java-8.0.29.jar")
.getOrCreate())
customschema = ST([
SF("Matched", STR()),
SF("MatchRate", STR()),
SF("ProposedAction", STR()), # e.g. is_new
SF("SimpleMatchRate", STR()), # e.g. 76.99800
SF("Status", STR())])
files = [file for file in glob.glob('../source_files/*fuzzy*')]
df = spark.read.csv(files, sep="\t", header="true", encoding="UTF-8", schema=customschema)
df.printSchema()
root
|-- Matched: string (nullable = true)
|-- MatchRate: string (nullable = true)
|-- ProposedAction: string (nullable = true)
|-- SimpleMatchRate: string (nullable = true)
|-- Status: string (nullable = true)
Now if I try to query the df as a table:
df.createOrReplaceTempView("tmp_table")
spark.sql("""SELECT MatchRate, ProposedAction, SimpleMatchRate
FROM tmp_table LIMIT 5""").show()
I get:
+-----------+----------------+-----------------+
| MatchRate | ProposedAction | SimpleMatchRate |
+-----------+----------------+-----------------+
| 0.043169 | 0.000000 | is_new |
| 88.67153 | 98.96907 | is_linked |
| 89.50349 | 98.94736 | is_linked |
| 99.44025 | 100.00000 | is_dupe |
| 90.78082 | 98.92473 | is_linked |
+-----------+----------------+-----------------+
I found what I was doing wrong: my schema definition did not follow the order of the columns in the input file. ProposedAction comes after SimpleMatchRate, like below:
. . . Matched  MatchRate  SimpleMatchRate  ProposedAction  status
I modified the definition as below and the issue is fixed:
customschema = ST([
SF("Matched", STR()),
SF("MatchRate", STR()),
SF("SimpleMatchRate", STR()),
SF("ProposedAction", STR()), # Now in the correct position as in the input file
SF("Status", STR())])

Reading a .txt file with colon (:) in spark 2.4

I am trying to read a .txt file in Spark 2.4 and load it into a DataFrame.
The file data looks like this (under a single manager there are many employees):
Manager_21: Employee_575,Employee_2703,
Manager_11: Employee_454,Employee_158,
Manager_4: Employee_1545,Employee_1312
Code I have written in Scala (Spark 2.4):
val df = spark.read
.format("csv")
.option("header", "true") //first line in file has headers
.option("mode", "DROPMALFORMED")
.load("D:/path/myfile.txt")
df.printSchema()
Unfortunately, when printing the schema, all the employees appear as separate columns under the single Manager_21 header.
root
|-- Manager_21: Employee_575: string (nullable = true)
|-- Employee_454: string (nullable = true)
|-- Employee_1312: string (nullable = true)
.......
...... etc
I am not sure if this is possible in Spark/Scala.
Expected output:
All employees of a manager in the same column.
For example, Manager_21 has 2 employees and both are in the same column.
In other words, how can we see which employees are under a particular manager?
Manager_21 |Manager_11 |Manager_4
Employee_575 |Employee_454 |Employee_1545
Employee_2703|Employee_158|Employee_1312
Is it possible to do this some other way? Please suggest.
Thanks
Try using spark.read.text, then groupBy and pivot to get the desired result.
Example:
val df=spark.read.text("<path>")
df.show(10,false)
//+--------------------------------------+
//|value |
//+--------------------------------------+
//|Manager_21: Employee_575,Employee_2703|
//|Manager_11: Employee_454,Employee_158 |
//|Manager_4: Employee_1545,Employee_1312|
//+--------------------------------------+
import org.apache.spark.sql.functions._
df.withColumn("mid",monotonically_increasing_id).
withColumn("col1",split(col("value"),":")(0)).
withColumn("col2",split(split(col("value"),":")(1),",")).
groupBy("mid").
pivot(col("col1")).
agg(min(col("col2"))).
select(max("Manager_11").alias("Manager_11"),max("Manager_21").alias("Manager_21") ,max("Manager_4").alias("Manager_4")).
selectExpr("explode(arrays_zip(Manager_11,Manager_21,Manager_4))").
select("col.*").
show()
//+-------------+-------------+--------------+
//| Manager_11| Manager_21| Manager_4|
//+-------------+-------------+--------------+
//| Employee_454| Employee_575| Employee_1545|
//| Employee_158|Employee_2703| Employee_1312|
//+-------------+-------------+--------------+
UPDATE:
val df=spark.read.text("<path>")
val df1=df.withColumn("mid",monotonically_increasing_id).
withColumn("col1",split(col("value"),":")(0)).
withColumn("col2",split(split(col("value"),":")(1),",")).
groupBy("mid").
pivot(col("col1")).
agg(min(col("col2"))).
select(max("Manager_11").alias("Manager_11"),max("Manager_21").alias("Manager_21") ,max("Manager_4").alias("Manager_4")).
selectExpr("explode(arrays_zip(Manager_11,Manager_21,Manager_4))")
//create temp table
df1.createOrReplaceTempView("tmp_table")
sql("select col.* from tmp_table").show(10,false)
//+-------------+-------------+--------------+
//|Manager_11 |Manager_21 |Manager_4 |
//+-------------+-------------+--------------+
//| Employee_454| Employee_575| Employee_1545|
//|Employee_158 |Employee_2703|Employee_1312 |
//+-------------+-------------+--------------+

Convert spark dataframe with string column to StructType column

I have a CSV file with the header "message" and rows like:
{"a":1,"b":"hello 1","c":"1234"}
{"a":2,"b":"hello 2","c":"2345"}
I want to convert them into different columns a, b, c.
I tried the following code:
df1 = (spark.read.format("csv").option("header", "true")
       .option("delimiter", "^")
       .option("inferSchema", "false")
       .load("testing.csv"))
But it is taking it as a string column.
df1.printSchema() --> string
Your file is in JSON format, with the first line being "message".
That first line can be ignored by using the "DROPMALFORMED" mode when reading with Spark's DataFrameReader.
file : json-test.txt
message
{"a":1,"b":"hello 1","c":"1234"}
{"a":2,"b":"hello 2","c":"2345"}
Reading the JSON file while ignoring bad records (the initial "message" line):
val jsondf = spark.read
.option("multiLine", false)
.option("mode", "DROPMALFORMED")
.json("files/file-reader-test/json-test.txt")
jsondf.show()
output:
+---+-------+----+
| a| b| c|
+---+-------+----+
| 1|hello 1|1234|
| 2|hello 2|2345|
+---+-------+----+
schema :
jsondf.printSchema()
root
|-- a: long (nullable = true)
|-- b: string (nullable = true)
|-- c: string (nullable = true)
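Alternatively, if you want to keep reading the file as CSV (for example because it has other columns besides message), a sketch of another approach is to leave the JSON strings in a single column, as in the question, and parse that column with from_json; the schema below is assumed from the sample rows:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Read as CSV with a delimiter that never occurs in the data, as in the question,
// so each JSON object stays intact in the single "message" column.
val rawDf = spark.read
  .option("header", "true")
  .option("delimiter", "^")
  .csv("files/file-reader-test/json-test.txt")

// Schema of the JSON objects held in the "message" column
val jsonSchema = new StructType()
  .add("a", LongType)
  .add("b", StringType)
  .add("c", StringType)

// Parse the string column into a struct, then flatten it into columns a, b and c
val parsed = rawDf
  .select(from_json(col("message"), jsonSchema).alias("m"))
  .select("m.*")

parsed.show()  // same three-column output as above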

Writing Spark SQL query on data without header or schema

I want to write a generic script that can run SQL queries on a file that doesn't have a header or pre-defined schema. For example, a file could look like:
Bob,32
Alice, 24
Jane,65
Doug,33
Peter,19
And the SQL query might be:
SELECT COUNT(DISTINCT ??)
FROM temp_table
WHERE ?? > 32
I am wondering what to put in the ??.
You can define a custom schema while reading, like this:
import org.apache.spark.sql.types._
val schema = StructType(
StructField("field1", StringType, true) ::
StructField("field2", IntegerType, true) :: Nil
)
val df = spark.read.format("csv")
.option("sep", ",")
.option("header", "false")
.schema(schema)
.load("examples/src/main/resources/people.csv")
You can also omit the schema, which ends up with default column names (not preferred):
val df = spark.read.format("csv")
.option("sep", ",")
.option("header", "false")
.load("examples/src/main/resources/people.csv")
+-----+-----+
| _c0| _c1|
+-----+-----+
| Bob| 32 |
| .. | ... |
+-----+-----+
With that, you can fill in the column names in your Spark SQL query.
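For example, a sketch of the rest of the flow using the field names defined above:
// register the DataFrame and use the schema's field names in place of the ?? placeholders
df.createOrReplaceTempView("temp_table")
spark.sql("SELECT COUNT(DISTINCT field1) FROM temp_table WHERE field2 > 32").show()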
It seems the default schema has column names _c0, _c1, etc.
val df = spark.read.format("csv").load("test.txt")
scala> df.printSchema
root
|-- _c0: string (nullable = true)
|-- _c1: string (nullable = true)
In Spark 2.0:
df.createOrReplaceTempView("temp_table")
spark.sql("SELECT COUNT(DISTINCT _c1) FROM temp_table WHERE cast(_c1 as int) > 32")

pyspark-java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema

I'm running PySpark SQL code on the Hortonworks sandbox.
18/08/11 17:02:22 INFO spark.SparkContext: Running Spark version 1.6.3
# code
from pyspark.sql import *
from pyspark.sql.types import *
rdd1 = sc.textFile ("/user/maria_dev/spark_data/products.csv")
rdd2 = rdd1.map( lambda x : x.split("," ) )
df1 = sqlContext.createDataFrame(rdd2, ["id","cat_id","name","desc","price", "url"])
df1.printSchema()
root
|-- id: string (nullable = true)
|-- cat_id: string (nullable = true)
|-- name: string (nullable = true)
|-- desc: string (nullable = true)
|-- price: string (nullable = true)
|-- url: string (nullable = true)
df1.show()
+---+------+--------------------+----+------+--------------------+
| id|cat_id| name|desc| price| url|
+---+------+--------------------+----+------+--------------------+
| 1| 2|Quest Q64 10 FT. ...| | 59.98|http://images.acm...|
| 2| 2|Under Armour Men'...| |129.99|http://images.acm...|
| 3| 2|Under Armour Men'...| | 89.99|http://images.acm...|
| 4| 2|Under Armour Men'...| | 89.99|http://images.acm...|
| 5| 2|Riddell Youth Rev...| |199.99|http://images.acm...|
# When I try to get counts I get the following error.
df1.count()
Caused by: java.lang.IllegalStateException: Input row doesn't have expected number of values required by the schema. 6 fields are required while 7 values are provided.
# I get the same error for the following code as well
df1.registerTempTable("products_tab")
df_query = sqlContext.sql ("select id, name, desc from products_tab order by name, id ").show();
I see the column desc is null; I'm not sure if a null column needs to be handled differently when creating the data frame or when using any method on it.
The same error occurs when running a SQL query. It seems the SQL error is due to the "order by" clause; if I remove "order by", the query runs successfully.
Please let me know if you need more info; an answer on how to handle this error would be appreciated.
I tried to see if the name field contains any commas, as suggested by Chandan Ray.
There is no comma in the name field.
rdd1.count()
=> 1345
rdd2.count()
=> 1345
# extracting the id and name columns from rdd2
rdd_name = rdd2.map(lambda x: (x[0], x[2]) )
rdd_name.count()
=>1345
rdd_name_comma = rdd_name.filter (lambda x : True if x[1].find(",") != -1 else False )
rdd_name_comma.count()
==> 0
I found the issue: it was due to one bad record, where a comma was embedded in a string. Even though the string was double-quoted, Python split it into 2 columns.
I tried using the Databricks spark-csv package:
# from command prompt
pyspark --packages com.databricks:spark-csv_2.10:1.4.0
# on pyspark
schema1 = StructType ([ StructField("id",IntegerType(), True), \
StructField("cat_id",IntegerType(), True), \
StructField("name",StringType(), True),\
StructField("desc",StringType(), True),\
StructField("price",DecimalType(), True), \
StructField("url",StringType(), True)
])
df1 = sqlContext.read.format('com.databricks.spark.csv').schema(schema1).load('/user/maria_dev/spark_data/products.csv')
df1.show()
+---+------+--------------------+----+-----+--------------------+
| id|cat_id| name|desc|price| url|
+---+------+--------------------+----+-----+--------------------+
| 1| 2|Quest Q64 10 FT. ...| | 60|http://images.acm...|
| 2| 2|Under Armour Men'...| | 130|http://images.acm...|
| 3| 2|Under Armour Men'...| | 90|http://images.acm...|
| 4| 2|Under Armour Men'...| | 90|http://images.acm...|
| 5| 2|Riddell Youth Rev...| | 200|http://images.acm...|
df1.printSchema()
root
|-- id: integer (nullable = true)
|-- cat_id: integer (nullable = true)
|-- name: string (nullable = true)
|-- desc: string (nullable = true)
|-- price: decimal(10,0) (nullable = true)
|-- url: string (nullable = true)
df1.count()
1345
I suppose your name field has a comma in it, so it is getting split as well, and hence 7 values are found where 6 fields are expected.
There might be some malformed lines.
Please try the code below to exclude the bad records:
val df = spark.read.format("csv").option("badRecordsPath", "/tmp/badRecordsPath").load("csvpath")
// it will read the csv and create a dataframe; any malformed record will be moved to the path you provided
// please read below
https://docs.databricks.com/spark/latest/spark-sql/handling-bad-records.html
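Note that badRecordsPath is a Databricks Runtime feature. On open-source Spark (2.x or later, where the CSV reader is built in), a similar effect can be had with the reader's mode option; a hedged sketch, with a Scala StructType mirroring schema1 above:
import org.apache.spark.sql.types._

// Scala equivalent of schema1 above (names and types as in the question)
val productSchema = new StructType()
  .add("id", IntegerType)
  .add("cat_id", IntegerType)
  .add("name", StringType)
  .add("desc", StringType)
  .add("price", DecimalType(10, 0))
  .add("url", StringType)

// DROPMALFORMED silently discards rows whose number of parsed fields does not
// match the schema (e.g. lines broken by a stray comma); the built-in reader
// also honours double quotes, so quoted commas inside the name field are kept.
val products = spark.read
  .schema(productSchema)
  .option("mode", "DROPMALFORMED")
  .csv("/user/maria_dev/spark_data/products.csv")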
Here is my take on cleaning such records; we normally encounter situations like this:
a. An anomaly in the data where, when the file was created, nobody checked whether "," was really the best delimiter for the columns.
Here is my solution for that case:
Solution a: In such cases, we want the process to identify, as part of data cleansing, whether a record is a qualified record. The remaining records, routed to a bad file/collection, give us the opportunity to reconcile them later.
Below is the structure of my dataset (product_id,product_name,unit_price)
1,product-1,10
2,product-2,20
3,product,3,30
In the above case, "product,3" is supposed to be read as "product-3", which might have been a typo when the product was registered. In such a case, the sample below would work.
>>> tf = open("C:/users/ip2134/pyspark_practice/test_file.txt")
>>> trec = tf.read().splitlines()
>>> trec_clean, trec_bad = [], []   # collect qualified and bad records separately
>>> for rec in trec:
...     if rec.count(",") == 2:
...         trec_clean.append(rec)
...     else:
...         trec_bad.append(rec)
...
>>> trec_clean
['1,product-1,10', '2,product-2,20']
>>> trec_bad
['3,product,3,30']
>>> trec
['1,product-1,10', '2,product-2,20', '3,product,3,30']
The other alternative for dealing with this problem would be to try whether skipinitialspace=True works to parse out the columns.
(Ref: Python parse CSV ignoring comma with double-quotes)
