Using PySpark, I have a data frame like this:
+-----+-----+----+
|col1 |col2 |col3|
+-----+-----+----+
|1    |[3,7]|5   |
|hello|4    |666 |
|4    |world|4   |
+-----+-----+----+
Now I want to get the name of the column that contains the number 666.
So the result should be "col3".
Thanks
Edit:
Added values other than ints. The great answers were focused on int values only, sorry.
Deleted: "while we are at it, I guess the index can also be retrieved easily."
from pyspark.sql import functions as F
from pyspark.sql.functions import col, expr

(df.withColumn('result', F.array(*[F.array(F.lit(x), col(x)) for x in df.columns]))  # create an array of [column name, value] pairs
 .withColumn('result', expr("transform(filter(result, (c, i) -> c[1] == 666), (c, i) -> c[0])"))  # keep the pairs whose value is 666 and extract the column name
 .show(truncate=False))
+----+----+----+------+
|col1|col2|col3|result|
+----+----+----+------+
|1   |3   |5   |[]    |
|2   |4   |666 |[col3]|
|4   |6   |4   |[]    |
+----+----+----+------+
Here's an approach that creates an array of (column name, value) structs from the columns and then filters it.
from pyspark.sql import functions as func

data_sdf. \
    withColumn('allcols',
               func.array(*[func.struct(func.lit(c).alias('name'),
                                        func.col(c).cast('string').alias('value'))
                            for c in data_sdf.columns])
               ). \
    withColumn('cols_w_666_arr',
               func.expr('transform(filter(allcols, x -> x.value = "666"), c -> c.name)')
               ). \
    drop('allcols'). \
    show(truncate=False)
# +---+---+---+--------------+
# |c1 |c2 |c3 |cols_w_666_arr|
# +---+---+---+--------------+
# |1 |3 |5 |[] |
# |2 |4 |666|[c3] |
# |4 |666|4 |[c2] |
# |666|666|4 |[c1, c2] |
# +---+---+---+--------------+
This is what the dataframe looks like:
+---+-----------------------------------------+-----+
|eco|eco_name |count|
+---+-----------------------------------------+-----+
|B63|Sicilian, Richter-Rauzer Attack |5 |
|D86|Grunfeld, Exchange |3 |
|C99|Ruy Lopez, Closed, Chigorin, 12...cd |5 |
|A44|Old Benoni Defense |3 |
|C46|Three Knights |1 |
|C08|French, Tarrasch, Open, 4.ed ed |13 |
|E59|Nimzo-Indian, 4.e3, Main line |2 |
|A20|English |2 |
|B20|Sicilian |4 |
|B37|Sicilian, Accelerated Fianchetto |2 |
|A33|English, Symmetrical |8 |
|C77|Ruy Lopez |8 |
|B43|Sicilian, Kan, 5.Nc3 |10 |
|A04|Reti Opening |6 |
|A59|Benko Gambit |1 |
|A54|Old Indian, Ukrainian Variation, 4.Nf3 |3 |
|D30|Queen's Gambit Declined |19 |
|C01|French, Exchange |3 |
|D75|Neo-Grunfeld, 6.cd Nxd5, 7.O-O c5, 8.dxc5|1 |
|E74|King's Indian, Averbakh, 6...c5 |2 |
+---+-----------------------------------------+-----+
Schema:
root
|-- eco: string (nullable = true)
|-- eco_name: string (nullable = true)
|-- count: long (nullable = false)
I want to filter it so that only two rows with minimum and maximum counts remain.
The output dataframe should look something like:
+---+-----------------------------------------+--------------------+
|eco|eco_name |number_of_occurences|
+---+-----------------------------------------+--------------------+
|D30|Queen's Gambit Declined |19 |
|C46|Three Knights |1 |
+---+-----------------------------------------+--------------------+
I'm a beginner, I'm really sorry if this is a stupid question.
No need to apologize since this is the place to learn! One of the solutions is to use the Window and rank to find the min/max row:
from pyspark.sql import functions as func
from pyspark.sql.window import Window

df = spark.createDataFrame(
    [('a', 1), ('b', 1), ('c', 2), ('d', 3)],
    schema=['col1', 'col2']
)
df.show(10, False)
+----+----+
|col1|col2|
+----+----+
|a |1 |
|b |1 |
|c |2 |
|d |3 |
+----+----+
Just use filtering to find the min/max count row after the ranking:
df\
    .withColumn('min_row', func.rank().over(Window.orderBy(func.asc('col2'))))\
    .withColumn('max_row', func.rank().over(Window.orderBy(func.desc('col2'))))\
    .filter((func.col('min_row') == 1) | (func.col('max_row') == 1))\
    .show(100, False)
+----+----+-------+-------+
|col1|col2|min_row|max_row|
+----+----+-------+-------+
|d |3 |4 |1 |
|a |1 |1 |3 |
|b |1 |1 |3 |
+----+----+-------+-------+
Please note that if several rows share the same min/max count, rank assigns them all rank 1, so they will all be returned (as with rows a and b above).
You can use row_number function twice to order records by count, ascending and descending.
SELECT eco, eco_name, count
FROM (SELECT *,
             row_number() over (order by count asc) as rna,
             row_number() over (order by count desc) as rnd
      FROM df)
WHERE rna = 1 or rnd = 1;
Note there's a tie for count = 1. If you care about it, add a secondary sort to control which record is selected, or use rank instead to select all tied records.
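For example, here is a minimal sketch of the rank variant run through spark.sql (assuming the PySpark dataframe is in a variable named df, which gets registered as a temp view; rank keeps every record tied for the minimum or maximum count):

df.createOrReplaceTempView("df")

spark.sql("""
    SELECT eco, eco_name, count
    FROM (SELECT *,
                 rank() OVER (ORDER BY count ASC)  AS rka,
                 rank() OVER (ORDER BY count DESC) AS rkd
          FROM df) t
    WHERE rka = 1 OR rkd = 1
""").show(truncate=False)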
I have a data range with dummy data, and I want to make a query that returns only the headers of the columns whose sum is greater than 0. My first attempt was to at least make a query that returns the columns for which SUM(column) > 0, using this formula:
=query(A1:D,"Select A, B, C, D, WHERE SUM(A)>0 AND SUM(B)>0 AND SUM(C)>0 AND SUM(D)>0",0)
But I haven't had any luck. Here is a sample of a dummy table. I would very much appreciate any pointers in this matter.
| Dog| Cow | Cat|Horse|
|:---|:---:|:--:|----:|
| 1 | 0 |2 |3 |
| 2 | 0 |4 |6 |
| 3 | 0 |6 |9 |
| 4 | 0 |8 |12 |
A simple approach would be:
=query({A:D},"Select "&if(sum(A2:A)>0,"Col1",)&if(sum(B2:B)>0,",Col2",)&if(sum(C2:C)>0,",Col3",)&if(sum(D2:D)>0,",Col4",)&" ",1)
I am struggling with a (Py)Spark problem.
I have a column "col" in an ordered dataframe and need a cumulative sum that restarts after every 0. What I need is the column "sum_from_0" below.
I tried it with window functions but did not succeed.
Any idea on how to solve this task would be appreciated.
Thank you in advance.
+---+----------+
|col|sum_from_0|
+---+----------+
|0  |None      |
|0  |None      |
|1  |1         |
|2  |3         |
|1  |4         |
|4  |8         |
|3  |11        |
|0  |None      |
|0  |None      |
|0  |None      |
|1  |1         |
|2  |3         |
|3  |6         |
|3  |9         |
|2  |11        |
|0  |None      |
|0  |None      |
+---+----------+
There is no ordering column, so I create one first and add some temp columns to separate the sum groups. After that, sum over a window partitioned by group and ordered by id:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val w1 = Window.orderBy("id")
val w2 = Window.partitionBy("group").orderBy("id")

df.withColumn("id", monotonically_increasing_id)
  .withColumn("zero", (col("col") === 0).cast("int"))
  .withColumn("group", sum("zero").over(w1))
  .withColumn("sum_from_0", sum("col").over(w2))
  .orderBy("id")
  .drop("id", "group", "zero")
  .show(20, false)
that gives the results:
+---+----------+
|col|sum_from_0|
+---+----------+
|0 |0 |
|0 |0 |
|1 |1 |
|2 |3 |
|1 |4 |
|4 |8 |
|3 |11 |
|0 |0 |
|0 |0 |
|0 |0 |
|1 |1 |
|2 |3 |
|3 |6 |
|3 |9 |
|2 |11 |
|0 |0 |
|0 |0 |
+---+----------+
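Since the question is about PySpark, here is a rough PySpark translation of the same approach (a sketch, assuming the ordered dataframe is named df and the input column is named col, as in the question):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

w1 = Window.orderBy("id")
w2 = Window.partitionBy("group").orderBy("id")

(df.withColumn("id", F.monotonically_increasing_id())    # preserve the incoming row order
   .withColumn("zero", (F.col("col") == 0).cast("int"))  # flag the rows that reset the sum
   .withColumn("group", F.sum("zero").over(w1))          # running count of resets = group id
   .withColumn("sum_from_0", F.sum("col").over(w2))      # cumulative sum within each group
   .orderBy("id")
   .drop("id", "zero", "group")
   .show(20, truncate=False))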
I have an input dataframe of the format
+----+------+-----+----------+
|name|values|score|row_number|
+----+------+-----+----------+
|A   |1000  |0    |1         |
|B   |947   |0    |2         |
|C   |923   |1    |3         |
|D   |900   |2    |4         |
|E   |850   |3    |5         |
|F   |800   |1    |6         |
+----+------+-----+----------+
I need to get sum(values) when score > 0 and row_number < K, i.e. the SUM of all values with score > 0 among the top K rows of the dataframe.
I am able to achieve this by running the following query for the top 100 values:
val top_100_data = df.select(
  count(when(col("score") > 0 and col("row_number") <= 100, col("values"))).alias("count_100"),
  sum(when(col("score") > 0 and col("row_number") <= 100, col("values"))).alias("sum_filtered_100"),
  sum(when(col("row_number") <= 100, col("values"))).alias("total_sum_100")
)
However, I need to fetch the data for the top 100, 200, 300, ..., 2500, meaning I would need to run this query 25 times and finally union 25 dataframes.
I'm new to spark and still figuring lots of things out. What would be the best approach to solve this problem?
Thanks!!
You can create an Array of limits as
val topFilters = Array(100, 200, 300) // you can add more
Then you can loop through the topFilters array and create the dataframe you require. I suggest you use join rather than union, as join will give you separate columns while union will give you separate rows. You can do the following.
Given your dataframe as
+----+------+-----+----------+
|name|values|score|row_number|
+----+------+-----+----------+
|A |1000 |0 |1 |
|B |947 |0 |2 |
|C |923 |1 |3 |
|D |900 |2 |200 |
|E |850 |3 |150 |
|F |800 |1 |250 |
+----+------+-----+----------+
You can do the following, using the topFilters array defined above:
import sqlContext.implicits._
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

var finalDF: DataFrame = Seq("1").toDF("rowNum")
for (k <- topFilters) {
  val top_100_data = df.select(lit("1").as("rowNum"),
    sum(when(col("score") > 0 && col("row_number") < k, col("values"))).alias(s"total_sum_$k"))
  finalDF = finalDF.join(top_100_data, Seq("rowNum"))
}
finalDF.show(false)
finalDF.show(false)
Which should give you the final dataframe as
+------+-------------+-------------+-------------+
|rowNum|total_sum_100|total_sum_200|total_sum_300|
+------+-------------+-------------+-------------+
|1 |923 |1773 |3473 |
+------+-------------+-------------+-------------+
You can do the same for your 25 limits that you have.
If you intend to use union, then the idea is similar to above.
I hope the answer is helpful
Updated
If you require union, then you can apply the following logic with the same limits array defined above:
var finalDF: DataFrame = Seq((0, 0, 0, 0)).toDF("limit", "count", "sum_filtered", "total_sum")
for (k <- topFilters) {
  val top_100_data = df.select(lit(k).as("limit"),
    count(when(col("score") > 0 and col("row_number") <= k, col("values"))).alias("count"),
    sum(when(col("score") > 0 and col("row_number") <= k, col("values"))).alias("sum_filtered"),
    sum(when(col("row_number") <= k, col("values"))).alias("total_sum"))
  finalDF = finalDF.union(top_100_data)
}
finalDF.filter(col("limit") =!= 0).show(false)
which should give you
+-----+-----+------------+---------+
|limit|count|sum_filtered|total_sum|
+-----+-----+------------+---------+
|100 |1 |923 |2870 |
|200 |3 |2673 |4620 |
|300 |4 |3473 |5420 |
+-----+-----+------------+---------+
I have a PySpark data frame which has a column containing strings. I want to split this column into words.
Code:
>>> sentenceData = sqlContext.read.load('file://sample1.csv', format='com.databricks.spark.csv', header='true', inferSchema='true')
>>> sentenceData.show(truncate=False)
+---+---------------------------+
|key|desc |
+---+---------------------------+
|1 |Virat is good batsman |
|2 |sachin was good |
|3 |but modi sucks big big time|
|4 |I love the formulas |
+---+---------------------------+
Expected Output
---------------
>>> sentenceData.show(truncate=False)
+---+-------------------------------------+
|key|desc |
+---+-------------------------------------+
|1 |[Virat,is,good,batsman] |
|2 |[sachin,was,good] |
|3 |.... |
|4 |... |
+---+-------------------------------------+
How can I achieve this?
Use split function:
from pyspark.sql.functions import split
df.withColumn("desc", split("desc", r"\s+"))
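Applied to the sample data above, that would look roughly like this (a sketch; the pattern is a regex, so the column is split on runs of whitespace and desc becomes an array<string>):

from pyspark.sql.functions import split

# "desc" becomes array<string>, e.g. [Virat, is, good, batsman] for the first row
sentenceData = sentenceData.withColumn("desc", split("desc", r"\s+"))
sentenceData.show(truncate=False)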