I'm using Spark 2.3.
I have a DataFrame like this (in other situations _c0 may contain 20 inner fields):
_c0 | _c1
-----------------------------
1.1 1.2 4.55 | a
4.44 3.1 9.99 | b
1.2 99.88 10.1 | x
I want to split _c0, and create new DataFrame like this:
col1 |col2 |col3 |col4
-----------------------------
1.1 |1.2 |4.55 | a
4.44 |3.1 |9.99 | b
1.2 |99.88 |10.1 | x
I know how to solve this using getItem():
import re

df = originalDf.rdd.map(lambda x: (re.split(" +", x[0]), x[1])).toDF()
# now, df[0] is an array of strings and df[1] is a string
df = df.select(df[0].getItem(0), df[0].getItem(1), df[0].getItem(2), df[1])
But I hoped to find a different way to solve this, because _c0 may contain more than 3 inner columns.
Is there a way to use flatMap to generate the df?
Is there a way to insert df[1] as an inner field of df[0]?
Is there a way to use df[0].getItem() so that it returns all inner fields?
Is there a simpler way to generate the DataFrame?
Any help will be appreciated
Thanks
Use the DataFrame split function with a regex pattern for whitespace ("\\s+").
Docs: https://spark.apache.org/docs/2.3.1/api/python/_modules/pyspark/sql/functions.html
def split(str, pattern):
    """
    Splits str around pattern (pattern is a regular expression).

    .. note:: pattern is a string represent the regular expression.

    >>> df = spark.createDataFrame([('ab12cd',)], ['s',])
    >>> df.select(split(df.s, '[0-9]+').alias('s')).collect()
    [Row(s=[u'ab', u'cd'])]
    """
    sc = SparkContext._active_spark_context
    return Column(sc._jvm.functions.split(_to_java_column(str), pattern))
Then you can use getItem on the array column to get a particular field value.
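If _c0 can hold an arbitrary number of inner fields, a minimal sketch building on the question's originalDf would be to split once and expand every array element in a list comprehension (inferring the field count from the first row is an assumption, not part of the original code):
from pyspark.sql.functions import split, col

arr = split(col("_c0"), r"\s+")              # array<string> column
n = len(originalDf.first()["_c0"].split())   # infer the inner field count from the first row
df = originalDf.select(
    [arr.getItem(i).alias("col{}".format(i + 1)) for i in range(n)]
    + [col("_c1").alias("col{}".format(n + 1))]
)
df.show()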
I have 2 DataFrames with matched and unmatched column names. I want to compare the column names of both frames and print a table/dataframe with the unmatched column names.
Please can someone help me with this? I have no idea how I can achieve it.
Below is the expectation
DF1:
DF2:
Output:
The output should show the actual vs. unmatched column names.
Update:
As per the expected output in the question, the requirement is to compare both dataframes (similar schema, different column names) and build a dataframe of the mismatched column names.
Thus, my best bet would be:
from pyspark.sql import Row

df3 = spark.createDataFrame(
        [Row(idx, x) for idx, x in enumerate(df1.schema.names) if x not in df2.schema.names]
    ).toDF("#", "Uncommon Columns From DF1") \
    .join(
        spark.createDataFrame(
            [Row(idx, x) for idx, x in enumerate(df2.schema.names) if x not in df1.schema.names]
        ).toDF("#", "Uncommon Columns From DF2"),
        "#")
The catch here is that the schemas should be similar, since the join matches column names based on their "ordinals", i.e. their respective positions in the schema.
Change the join type to "full_outer" in case there are extra columns in either dataframe.
df3 = spark.createDataFrame(
        [Row(idx, x) for idx, x in enumerate(df1.schema.names) if x not in df2.schema.names]
    ).toDF("#", "Uncommon Columns From DF1") \
    .join(
        spark.createDataFrame(
            [Row(idx, x) for idx, x in enumerate(df2.schema.names) if x not in df1.schema.names]
        ).toDF("#", "Uncommon Columns From DF2"),
        "#", "full_outer")
You can do this easily by using set operations.
Data Preparation
from io import StringIO
import pandas as pd

# `sql` below is the SparkSession (or SQLContext) handle
s1 = StringIO("""
firstName,lastName,age,city,country
Alex,Smith,19,SF,USA
Rick,Mart,18,London,UK
""")

df1 = pd.read_csv(s1, delimiter=',')
sparkDF1 = sql.createDataFrame(df1)

s2 = StringIO("""
firstName,lastName,age
Alex,Smith,21
""")

df2 = pd.read_csv(s2, delimiter=',')
sparkDF2 = sql.createDataFrame(df2)
sparkDF1.show()
+---------+--------+---+------+-------+
|firstName|lastName|age| city|country|
+---------+--------+---+------+-------+
| Alex| Smith| 19| SF| USA|
| Rick| Mart| 18|London| UK|
+---------+--------+---+------+-------+
sparkDF2.show()
+---------+--------+---+
|firstName|lastName|age|
+---------+--------+---+
| Alex| Smith| 21|
+---------+--------+---+
Columns - Intersections & Difference
common = set(sparkDF1.columns) & set(sparkDF2.columns)
diff = set(sparkDF1.columns) - set(sparkDF2.columns)
print("Common - ",common)
## Common - {'lastName', 'age', 'firstName'}
print("Difference - ",diff)
## Difference - {'city', 'country'}
Additionally, you can create tables/dataframes from the above variable values, as shown below.
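A minimal sketch (the column label is illustrative), reusing the same `sql` handle and the `diff` set from the data preparation above:
unmatchedDF = sql.createDataFrame(
    [(c,) for c in sorted(diff)],
    ["Uncommon Columns From DF1"]
)
unmatchedDF.show()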
I have a dataframe like this:
inputRecordSetCount | inputRecordCount | suspenseRecordCount
------------------------------------------------------------
                166 |             1216 |                  10
I am trying to make it look like:
operation           | value
---------------------------
inputRecordSetCount | 166
inputRecordCount    | 1216
suspenseRecordCount | 10
I tried pivot, but it needs a groupBy field and I don't have any. I found some references to stack in Scala, but I am not sure how to use it in PySpark. Any help would be appreciated. Thank you.
You can use the stack() operation as mentioned in this tutorial.
Since there are 3 columns to unpivot, pass that count followed by pairs of label and column name:
stack(3, "inputRecordSetCount", inputRecordSetCount, "inputRecordCount", inputRecordCount, "suspenseRecordCount", suspenseRecordCount) as (operation, value)
Full example:
df = spark.createDataFrame(data=[[166,1216,10]], schema=['inputRecordSetCount','inputRecordCount','suspenseRecordCount'])
cols = [f'"{c}", {c}' for c in df.columns]
exprs = f"stack({len(cols)}, {', '.join(str(c) for c in cols)}) as (operation, value)"
df = df.selectExpr(exprs)
df.show()
+-------------------+-----+
| operation|value|
+-------------------+-----+
|inputRecordSetCount| 166|
| inputRecordCount| 1216|
|suspenseRecordCount| 10|
+-------------------+-----+
I am using pyspark to read a parquet file like below:
my_df = sqlContext.read.parquet('hdfs://myPath/myDB.db/myTable/**')
Then when I do my_df.take(5), it will show [Row(...)] instead of a table format like we get with a pandas dataframe.
Is it possible to display the data frame in a table format like pandas data frame? Thanks!
The show method does what you're looking for.
For example, given the following dataframe of 3 rows, I can print just the first two rows like this:
df = sqlContext.createDataFrame([("foo", 1), ("bar", 2), ("baz", 3)], ('k', 'v'))
df.show(n=2)
which yields:
+---+---+
| k| v|
+---+---+
|foo| 1|
|bar| 2|
+---+---+
only showing top 2 rows
As mentioned by @Brent in the comment on @maxymoo's answer, you can try
df.limit(10).toPandas()
to get a prettier table in Jupyter. But this can take some time to run if you are not caching the Spark dataframe. Also, .limit() will not keep the order of the original Spark dataframe.
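A minimal sketch of that caching point, assuming you are in a Jupyter cell and df is the Spark dataframe you are previewing:
df.cache()               # cache so repeated previews don't recompute the whole lineage
df.limit(10).toPandas()  # Jupyter renders the pandas result as an HTML table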
Let's say we have the following Spark DataFrame:
df = sqlContext.createDataFrame(
[
(1, "Mark", "Brown"),
(2, "Tom", "Anderson"),
(3, "Joshua", "Peterson")
],
('id', 'firstName', 'lastName')
)
There are typically three different ways to print the content of the dataframe:
Print Spark DataFrame
The most common way is to use show() function:
>>> df.show()
+---+---------+--------+
| id|firstName|lastName|
+---+---------+--------+
| 1| Mark| Brown|
| 2| Tom|Anderson|
| 3| Joshua|Peterson|
+---+---------+--------+
Print Spark DataFrame vertically
Say that you have a fairly large number of columns and your dataframe doesn't fit on the screen. You can print the rows vertically. For example, the following command will print the top two rows, vertically, without any truncation.
>>> df.show(n=2, truncate=False, vertical=True)
-RECORD 0-------------
id | 1
firstName | Mark
lastName | Brown
-RECORD 1-------------
id | 2
firstName | Tom
lastName | Anderson
only showing top 2 rows
Convert to Pandas and print Pandas DataFrame
Alternatively, you can convert your Spark DataFrame into a Pandas DataFrame using .toPandas() and finally print() it.
>>> df_pd = df.toPandas()
>>> print(df_pd)
id firstName lastName
0 1 Mark Brown
1 2 Tom Anderson
2 3 Joshua Peterson
Note that this is not recommended when you have to deal with fairly large dataframes, as Pandas needs to load all the data into memory. If this is the case, the following configuration will help when converting a large spark dataframe to a pandas one:
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
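On Spark 2.x, to the best of my knowledge the equivalent setting is named spark.sql.execution.arrow.enabled; a hedged sketch:
spark.conf.set("spark.sql.execution.arrow.enabled", "true")  # assumption: Spark 2.x name of the Arrow flag
df_pd = df.toPandas()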
For more details you can refer to my blog post Speeding up the conversion between PySpark and Pandas DataFrames
Yes: call the toPandas method on your dataframe and you'll get an actual pandas dataframe!
By default, the show() function prints 20 records of the DataFrame. You can define the number of rows you want to print by providing an argument to show(). You never know in advance how many rows the DataFrame will have, so you can pass df.count() as the argument, which will print all records of the DataFrame.
df.show()            # prints 20 records by default
df.show(30)          # prints 30 records
df.show(df.count())  # get the total row count and pass it as the argument to print all records
If you are using Jupyter, this is what worked for me:
[1]
df = spark.read.parquet("s3://df/*")
[2]
dsp = df
[3]
%%display
dsp
This shows a well-formatted HTML table; you can also draw some simple charts on it straight away. For more documentation of %%display, type %%help.
Maybe something like this is a tad more elegant:
df.display()
# OR
df.select('column1').display()
I'd like to build a pyspark ML model from data stored in a hive table. The data looks like this:
ID | value
---+------
1 | 100
1 | 101
1 | 102
2 | 101
2 | 103
Using pure hive, I could optionally use collect_set to collapse the values into hive arrays producing something like this:
ID | value
---+-----------
1 | (100, 101, 102)
2 | (101, 103)
The values are categorical features. For this particular use case I'm fine to consider them as indices to a sparse vector of 1, but it'd be nice to have a solution for general categoricals a la StringIndexer(). What I'd like to do is to gather the values into a feature vector which I could then feed to one of the classifiers.
I tried using a UDF to convert the arrays into VectorUDT and then featurize with VectorIndexer(), but when I tried this it complained that all vectors are not the same length.
What's the proper way to gather these?
Nothing stops you from using collect_set in Spark SQL as well. It is just quite expensive. If you don't mind that, all you need is a bunch of imports:
from pyspark.sql.functions import collect_set, udf, col
from pyspark.ml.linalg import SparseVector, VectorUDT

n = df.agg({"value": "max"}).first()[0] + 1

to_vector = udf(lambda xs: SparseVector(n, {x: 1.0 for x in xs}), VectorUDT())

(df
    .groupBy("id")
    .agg(collect_set(col("value")).alias("values"))
    .select("id", to_vector(col("values")).alias("features")))
I have a very big pyspark.sql.dataframe.DataFrame named df.
I need some way of enumerating records, so I can access a record with a certain index (or select a group of records within an index range).
In pandas, I could simply do:
indexes=[2,3,6,7]
df[indexes]
Here I want something similar (and without converting the dataframe to pandas).
The closest I can get to is:
Enumerating all the objects in the original dataframe by:
indexes=np.arange(df.count())
df_indexed=df.withColumn('index', indexes)
Searching for the values I need using the where() function.
QUESTIONS:
Why doesn't it work, and how can I make it work? How can I add a row to a dataframe?
Would it work later to do something like:
indexes=[2,3,6,7]
df1.where("index in indexes").collect()
Is there any faster and simpler way to deal with it?
It doesn't work because:
the second argument for withColumn should be a Column, not a collection; np.array won't work here
when you pass "index in indexes" as a SQL expression to where, indexes is out of scope and is not resolved as a valid identifier
PySpark >= 1.4.0
You can add row numbers using the respective window function and query using the Column.isin method or a properly formatted query string:
from pyspark.sql.functions import col, rowNumber
from pyspark.sql.window import Window
w = Window.orderBy()
indexed = df.withColumn("index", rowNumber().over(w))
# Using DSL
indexed.where(col("index").isin(set(indexes)))
# Using SQL expression
indexed.where("index in ({0})".format(",".join(str(x) for x in indexes)))
It looks like window functions called without a PARTITION BY clause move all data to a single partition, so the above may not be the best solution after all.
Any faster and simpler way to deal with it?
Not really. Spark DataFrames don't support random row access.
PairedRDD can be accessed using lookup method which is relatively fast if data is partitioned using HashPartitioner. There is also indexed-rdd project which supports efficient lookups.
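For illustration, a rough sketch of that pair-RDD lookup idea against the indexed dataframe built above (the partition count 16 is arbitrary):
pairs = indexed.rdd.map(lambda r: (r["index"], r)).partitionBy(16)  # hash-partitioned by key
pairs.lookup(3)                                                     # returns the rows stored under key 3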
Edit:
Independent of PySpark version you can try something like this:
from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, LongType
row = Row("char")
row_with_index = Row("char", "index")
df = sc.parallelize(row(chr(x)) for x in range(97, 112)).toDF()
df.show(5)
## +----+
## |char|
## +----+
## | a|
## | b|
## | c|
## | d|
## | e|
## +----+
## only showing top 5 rows
# This part is not tested but should work and save some work later
schema = StructType(
df.schema.fields[:] + [StructField("index", LongType(), False)])
indexed = (df.rdd # Extract rdd
.zipWithIndex() # Add index
.map(lambda ri: row_with_index(*list(ri[0]) + [ri[1]])) # Map to rows
.toDF(schema)) # It will work without schema but will be more expensive
# inSet in Spark < 1.3
indexed.where(col("index").isin(indexes))
If you want a number range that's guaranteed not to collide but does not require a .over(partitionBy()), then you can use monotonicallyIncreasingId().
from pyspark.sql.functions import monotonicallyIncreasingId
df.select(monotonicallyIncreasingId().alias("rowId"),"*")
Note though that the values are not particularly "neat". Each partition is given a value range and the output will not be contiguous. E.g. 0, 1, 2, 8589934592, 8589934593, 8589934594.
This was added to Spark on Apr 28, 2015 here: https://github.com/apache/spark/commit/d94cd1a733d5715792e6c4eac87f0d5c81aebbe2
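If you do need contiguous numbers, a hedged sketch combining the two ideas in this thread (it uses the monotonic id only for ordering, and still pulls data through a single-partition window as noted above):
from pyspark.sql.functions import monotonically_increasing_id, row_number
from pyspark.sql.window import Window

w = Window.orderBy(monotonically_increasing_id())
indexed = df.withColumn("rowId", row_number().over(w) - 1)  # contiguous 0-based index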
from pyspark.sql.functions import monotonically_increasing_id
df.withColumn("Atr4", monotonically_increasing_id())
If you only need incremental values (like an ID) and there is no
constraint that the numbers need to be consecutive, you could use
monotonically_increasing_id(). The only guarantee when using this
function is that the values will be increasing for each row; however,
the values themselves can differ between executions.
You can certainly add an array for indexing, and it can be any array of your choice:
In Scala, first we need to create an indexing Array:
val index_array=(1 to df.count.toInt).toArray
index_array: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
You can now append this column to your DF. For that, you need to open up the DF and get it as an array, zip it with your index_array, and then convert the new array back into an RDD. The final step is to get it as a DF:
val final_df = sc.parallelize((df.collect.map(
    x => (x(0), x(1))) zip index_array).map(
    x => (x._1._1.toString, x._1._2.toString, x._2)))
  .toDF("column_1", "column_2", "index")
The indexing will be clearer after that.
monotonicallyIncreasingId() will assign row numbers in increasing order, but not in sequence.
sample output with 2 columns:
|---------------------|------------------|
| RowNo | Heading 2 |
|---------------------|------------------|
| 1 | xy |
|---------------------|------------------|
| 12 | xz |
|---------------------|------------------|
If you want to assign row numbers, use the following trick (tested in spark-2.0.1 and greater versions).
df.createOrReplaceTempView("df")
dfRowId = spark.sql("select *, row_number() over (partition by 0) as rowNo from df")
sample output with 2 columns:
|---------------------|------------------|
| RowNo | Heading 2 |
|---------------------|------------------|
| 1 | xy |
|---------------------|------------------|
| 2 | xz |
|---------------------|------------------|
Hope this helps.
To select a single row n of a PySpark DataFrame, try:
df.where(df.id == n).show()
Given a Pyspark DataFrame:
df = spark.createDataFrame([(1, 143.5, 5.6, 28, 'M', 100000),\
(2, 167.2, 5.4, 45, 'M', None),\
(3, None , 5.2, None, None, None),\
], ['id', 'weight', 'height', 'age', 'gender', 'income'])
Selecting the 3rd row, try:
df.where('id == 3').show()
Or:
df.where(df.id == 3).show()
To select multiple rows by their ids (the 2nd and 3rd rows in this case), try:
ids = [2, 3]
df.where(df.id.isin(ids)).show()
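For a contiguous range of ids, a small sketch using Column.between also works:
df.where(df.id.between(2, 3)).show()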