Easy way to center a column in a Spark DataFrame - apache-spark

I want to center a column in a Spark DataFrame, i.e., subtract the mean of the column from each element of the column. Currently I do it manually: first calculate the mean of the column, extract the value from the reduced DataFrame, and then subtract that average from the column. Is there an easier way to do this in Spark? Is there a built-in function for it?

There is no built-in function for this, but you can use a user-defined function (udf) as below:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{mean, udf}

val df = spark.sparkContext.parallelize(List(
  (2.06, 0.56),
  (1.96, 0.72),
  (1.70, 0.87),
  (1.90, 0.64))).toDF("c1", "c2")

// Returns a udf that subtracts the given mean from a column value
def subMean(mean: Double) = udf[Double, Double]((value: Double) => value - mean)

// Centers the given column by subtracting its mean
def getCenterDF(df: DataFrame, col: String): DataFrame = {
  val avg = df.select(mean(col)).first().getAs[Double](0)
  df.withColumn(col, subMean(avg)(df(col)))
}
scala> df.show(false)
+----+----+
|c1  |c2  |
+----+----+
|2.06|0.56|
|1.96|0.72|
|1.7 |0.87|
|1.9 |0.64|
+----+----+
scala> getCenterDF(df, "c2").show(false)
+----+--------------------+
|c1  |c2                  |
+----+--------------------+
|2.06|-0.13750000000000007|
|1.96|0.022499999999999853|
|1.7 |0.17249999999999988 |
|1.9 |-0.05750000000000011|
+----+--------------------+
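For what it's worth, a UDF is not strictly needed here: once the mean has been computed, the subtraction can be expressed as a plain column expression. A minimal sketch (the helper name centerColumn is just illustrative):
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.mean

// Subtract the column's mean from every value; Column's - operator accepts a plain Double
def centerColumn(df: DataFrame, col: String): DataFrame = {
  val avg = df.select(mean(col)).first().getAs[Double](0)
  df.withColumn(col, df(col) - avg)
}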

Related

Pyspark - Find sub-string from a column of data-frame with another data-frame

I have two different dataframes in Pyspark, both of String type. The first dataframe holds single words, while the second holds strings of words, i.e., sentences. I have to check for the existence of each value of the first dataframe's column in the second dataframe's column. For example,
df2
+---+------+------+-----------------+
|age|height|  name|        Sentences|
+---+------+------+-----------------+
| 10|    80| Alice|   'Grace, Sarah'|
| 15|  null|   Bob|          'Sarah'|
| 12|  null|   Tom|'Amy, Sarah, Bob'|
| 13|  null|Rachel|       'Tom, Bob'|
+---+------+------+-----------------+
Second dataframe
df1
+-------+
| token |
+-------+
| 'Ali' |
|'Sarah'|
| 'Bob' |
| 'Bob' |
+-------+
So, how can I search for each token of df1 in the df2 Sentences column? I need a count for each word, added as a new column in df1.
I have tried this solution, but it only works for a single word, not for a whole column of the dataframe.
Considering the dataframe in the previous answer:
from pyspark.sql.functions import explode,explode_outer,split, length,trim
df3 = df2.select('Sentences',explode(split('Sentences',',')).alias('friends'))
df3 = df3.withColumn("friends", trim("friends")).withColumn("length_of_friends", length("friends"))
display(df3)
df3 = df3.join(df1, df1.token == df3.friends,how='inner').groupby('friends').count()
display(df3)
You could use a PySpark udf to create the new column in df1.
The problem is that you cannot access a second dataframe inside a udf (see here).
As advised in the referenced question, you can make the sentences available as a broadcast variable.
Here is a working example:
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf

# Instantiate df2
cols = ["age", "height", "name", "Sentences"]
data = [
    (10, 80, "Alice", "Grace, Sarah"),
    (15, None, "Bob", "Sarah"),
    (12, None, "Tom", "Amy, Sarah, Bob"),
    (13, None, "Rachel", "Tom, Bob")
]
df2 = spark.createDataFrame(data).toDF(*cols)

# Instantiate df1
cols = ["token"]
data = [
    ("Ali",),
    ("Sarah",),
    ("Bob",),
    ("Bob",)
]
df1 = spark.createDataFrame(data).toDF(*cols)

# Create a broadcast variable from the Sentences column of df2
lstSentences = [row[0] for row in df2.select('Sentences').collect()]
sentences = spark.sparkContext.broadcast(lstSentences)

def countWordInSentence(word):
    # Count how many of the broadcast sentences contain the word
    return sum(1 for item in sentences.value if word in item)

func_udf = udf(countWordInSentence, IntegerType())
df1 = df1.withColumn("COUNT", func_udf(df1["token"]))
df1.show()

Round all columns in dataframe - two decimal place pyspark

I use this command for each column in my dataframe to round it to 2 decimal places:
data = data.withColumn("columnName1", func.round(data["columnName1"], 2))
I have no idea how to round the whole DataFrame with one command (rather than every column separately). Could somebody help me, please? I don't want to repeat the same command 50 times with a different column name.
There is no single function or command that applies the rounding to all columns at once, but you can iterate over them.
+-----+-----+
| col1| col2|
+-----+-----+
|1.111|2.222|
+-----+-----+
from pyspark.sql.functions import round

df = spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv")
for c in df.columns:
    df = df.withColumn(c, round(c, 2))
df.show()
+----+----+
|col1|col2|
+----+----+
|1.11|2.22|
+----+----+
To avoid converting non-FP columns:
import pyspark.sql.functions as F

for c_name, c_type in df.dtypes:
    if c_type in ('double', 'float'):
        df = df.withColumn(c_name, F.round(c_name, 2))

In Spark dataframe how to Transpose rows to columns?

This may be a very simple question. I want to transpose all the rows of my dataframe into columns, turning the input DF shown below into the output DF. What are the ways to achieve this in Spark?
Note: I have a single column in the input DF.
import sparkSession.sqlContext.implicits._
val df = Seq(("row1"), ("row2"), ("row3"), ("row4"), ("row5")).toDF("COLUMN_NAME")
df.show(false)
Input DF:
+-----------+
|COLUMN_NAME|
+-----------+
|row1       |
|row2       |
|row3       |
|row4       |
|row5       |
+-----------+
Output DF
+----+----+----+----+----+
|row1|row2|row3|row4|row5|
+----+----+----+----+----+
Does this help you?
import org.apache.spark.sql.functions.{first, monotonically_increasing_id}

df.withColumn("group", monotonically_increasing_id()).groupBy("group").pivot("COLUMN_NAME").agg(first("COLUMN_NAME")).show

Spark groupby, sort values, then take first and last

I'm using Apache Spark and have a dataframe that looks like this:
scala> df.printSchema
root
|-- id: string (nullable = true)
|-- epoch: long (nullable = true)
scala> df.show(10)
+--------------------+-------------+
|                  id|        epoch|
+--------------------+-------------+
|6825a28d-abe5-4b9...|1533926790847|
|6825a28d-abe5-4b9...|1533926790847|
|6825a28d-abe5-4b9...|1533180241049|
|6825a28d-abe5-4b9...|1533926790847|
|6825a28d-abe5-4b9...|1532977853736|
|6825a28d-abe5-4b9...|1532531733106|
|1eb5f3a4-a68c-4af...|1535383198000|
|1eb5f3a4-a68c-4af...|1535129922000|
|1eb5f3a4-a68c-4af...|1534876240000|
|1eb5f3a4-a68c-4af...|1533840537000|
+--------------------+-------------+
only showing top 10 rows
I want to group by the id field to get all the epoch timestamps together for an id. I then want to sort the epochs by ascending timestamp and then take the first and last epochs.
I used the following query, but the first and last epoch values appear to be taken in the order in which they appear in the original dataframe. I want the first and last to be taken after sorting the epochs in ascending order.
scala> val df2 = df.groupBy("id").
agg(first("epoch").as("first"), last("epoch").as("last"))
scala> df2.show()
+--------------------+-------------+-------------+
|                  id|        first|         last|
+--------------------+-------------+-------------+
|4f433f46-37e8-412...|1535342400000|1531281600000|
|d0cba2f9-cc04-42c...|1535537741000|1530448494000|
|6825a28d-abe5-4b9...|1533926790847|1532531733106|
|e963f265-809c-425...|1534996800000|1534996800000|
|1eb5f3a4-a68c-4af...|1535383198000|1530985221000|
|2e65a033-85ed-4e4...|1535660873000|1530494913413|
|90b94bb0-740c-42c...|1533960000000|1531108800000|
+--------------------+-------------+-------------+
How do I retrieve the first and last from the epoch list sorted by ascending epoch?
The first and last functions are meaningless when applied outside a Window context; the value that is taken is purely arbitrary.
Instead you should:
Use the min / max functions if the logic conforms to basic ordering rules (alphanumeric for strings, arrays, and structs; numeric for numbers).
Use a strongly typed Dataset with map -> groupByKey -> reduceGroups or groupByKey -> mapGroups otherwise (see the sketch below).
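For the second option, here is a minimal sketch of the typed route. It assumes the id/epoch schema shown above, that spark.implicits._ is in scope, and it uses a hypothetical Record case class:
// Hypothetical case class matching the schema printed above
case class Record(id: String, epoch: Long)

val firstLast = df.as[Record]
  .groupByKey(_.id)
  .mapGroups { (id, rows) =>
    // Materialize the group's epochs once, then take min and max
    val epochs = rows.map(_.epoch).toVector
    (id, epochs.min, epochs.max)
  }
  .toDF("id", "first", "last")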
You can just use min and max and cast the resulting columns to string. Here is one way to do it:
import org.apache.spark.sql.functions._
val df = Seq(("6825a28d-abe5-4b9",1533926790847.0),
("6825a28d-abe5-4b9",1533926790847.0),
("6825a28d-abe5-4b9",1533180241049.0),
("6825a28d-abe5-4b9",1533926790847.0),
("6825a28d-abe5-4b9",1532977853736.0),
("6825a28d-abe5-4b9",1532531733106.0),
("1eb5f3a4-a68c-4af",1535383198000.0),
("1eb5f3a4-a68c-4af",1535129922000.0),
("1eb5f3a4-a68c-4af",1534876240000.0),
("1eb5f3a4-a68c-4af",1533840537000.0)).toDF("id","epoch").withColumn("epoch",($"epoch"/1000.0).cast("timestamp"))
+-----------------+--------------------+
|               id|               epoch|
+-----------------+--------------------+
|6825a28d-abe5-4b9|2018-08-10 18:46:...|
|6825a28d-abe5-4b9|2018-08-10 18:46:...|
|6825a28d-abe5-4b9|2018-08-02 03:24:...|
|6825a28d-abe5-4b9|2018-08-10 18:46:...|
|6825a28d-abe5-4b9|2018-07-30 19:10:...|
|6825a28d-abe5-4b9|2018-07-25 15:15:...|
|1eb5f3a4-a68c-4af| 2018-08-27 15:19:58|
|1eb5f3a4-a68c-4af| 2018-08-24 16:58:42|
|1eb5f3a4-a68c-4af| 2018-08-21 18:30:40|
|1eb5f3a4-a68c-4af| 2018-08-09 18:48:57|
+-----------------+--------------------+
val df1 = df.groupBy("id").agg(min($"epoch").cast("string").as("first"), max($"epoch").cast("string"). as("last"))
df1.show
+-----------------+--------------------+--------------------+
|               id|               first|                last|
+-----------------+--------------------+--------------------+
|6825a28d-abe5-4b9|2018-07-25 15:15:...|2018-08-10 18:46:...|
|1eb5f3a4-a68c-4af| 2018-08-09 18:48:57| 2018-08-27 15:19:58|
+-----------------+--------------------+--------------------+
df1: org.apache.spark.sql.DataFrame = [id: string, first: string ... 1 more field]

Trim in a Pyspark Dataframe

I have a PySpark dataframe (the original dataframe) with the data below (all columns have string datatype). In my use case I am not sure which columns are present in this input dataframe. The user just passes me the name of the dataframe and asks me to trim all of its columns. Data in a typical dataframe looks as below:
id  Value    Value1
1   "Text "  "Avb"
2   1504     " Test"
3   1        2
Is there any way I can do this without depending on which columns are present in the dataframe, and get all of the columns trimmed? After trimming all the columns, the data should look like this:
id  Value   Value1
1   "Text"  "Avb"
2   1504    "Test"
3   1       2
Can someone help me out? How can I achieve it with a PySpark dataframe? Any help will be appreciated.
Input:
df.show()
+---+-----+------+
| id|Value|Value1|
+---+-----+------+
|  1|Text |   Avb|
|  2| 1504|  Test|
|  3|    1|     2|
+---+-----+------+
Code:
import pyspark.sql.functions as func

for col in df.columns:
    df = df.withColumn(col, func.ltrim(func.rtrim(df[col])))
Output:
df.show()
+---+-----+------+
| id|Value|Value1|
+---+-----+------+
|  1| Text|   Avb|
|  2| 1504|  Test|
|  3|    1|     2|
+---+-----+------+
Using the trim() function from @osbon123's answer:
from pyspark.sql.functions import col, trim

for c_name in df.columns:
    df = df.withColumn(c_name, trim(col(c_name)))
You should avoid using withColumn in a loop because each call creates a new DataFrame, which is time-consuming for very large dataframes. I created the following function based on this solution, and it works with any dataframe, even one that has both string and non-string columns.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

def trim_string_columns(of_data: DataFrame) -> DataFrame:
    # Trim only the string columns; pass the other columns through unchanged
    data_trimmed = of_data.select([
        (F.trim(c.name).alias(c.name) if isinstance(c.dataType, StringType) else c.name)
        for c in of_data.schema
    ])
    return data_trimmed
This is the cleanest (and most computationally efficient) way I've seen to remove spaces from all column names. If you want underscores instead of the spaces, simply replace "" with "_".
# Standardize column names: strip spaces (use "_" instead of "" to replace them with underscores)
new_column_name_list = list(map(lambda x: x.replace(" ", ""), df.columns))
df = df.toDF(*new_column_name_list)
You can use the dtypes function of the DataFrame API to get the list of column names along with their datatypes, and then for all string columns use the trim function to trim the values.
Regards,
Neeraj
