How to efficiently concatenate data frames with different column sets in Spark? - apache-spark

I have two tables with different but overlapping column sets. I want to concatenate them the way pandas does, but it is very inefficient in Spark.
X:
A B
0 1 3
1 2 4
Y:
A C
0 5 7
1 6 8
pd.concat([X, Y]):
A B C
0 1 3 NaN
1 2 4 NaN
0 5 NaN 7
1 6 NaN 8
I tried to use Spark SQL to do it...
select A, B, null as C from X union all select A, null as B, C from Y
... and it is extremely slow. I applied this query to two tables with sizes (79 rows, 17330 columns) and (92 rows, 16 columns). It took 129s running on Spark 1.6.2, 319s on Spark 2.0.1 and 1.2s on pandas.
Why is it so slow? Is this some kind of bug? Can it be done faster using spark?
EDIT:
I tried to do it programmatically as in here: how to union 2 spark dataframes with different amounts of columns - it's even slower.
It seems the problem is adding the null columns; maybe that can be done differently, or that part could be made faster?
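For reference, the programmatic version of this (a minimal PySpark sketch of the "add null columns, then union" idea described above; the function name and the use of lit(None) are illustrative, not taken from the linked answer) looks roughly like this, and it shares the same bottleneck of building one null literal per missing column:

from pyspark.sql import functions as F

def union_with_nulls(left, right):
    # Align the two column sets by padding each side with null literals,
    # then union by position. On some Spark versions lit(None) may need an
    # explicit .cast(...) to the target column's type.
    all_cols = left.columns + [c for c in right.columns if c not in left.columns]
    def pad(df):
        return df.select([F.col(c) if c in df.columns else F.lit(None).alias(c)
                          for c in all_cols])
    return pad(left).unionAll(pad(right))  # .union(...) on Spark 2.x

Z = union_with_nulls(X, Y)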

Related

Compare two dataframes and export unmatched data using pandas or other packages?

I have two dataframes, and one is a subset of the other (picture below). I am not sure whether pandas can compare the two dataframes, filter out the data that is not in the subset, and export it as a dataframe. Or is there any package that does this kind of task?
The subset dataframe was generated from RandomUnderSampler, but RandomUnderSampler does not have a function that exports the unselected data. Any comments are welcome.
Use drop_duplicates with the keep=False parameter:
Example:
>>> df1
A B
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
>>> df2
A B
0 0 1
1 2 3
2 6 7
>>> pd.concat([df1, df2]).drop_duplicates(keep=False)
A B
2 4 5
4 8 9
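If df2 is guaranteed to be a subset of df1, a left merge with an indicator column is another way to isolate the unmatched rows (a hedged alternative sketch, not part of the original answer):

merged = df1.merge(df2, how='left', indicator=True)
unmatched = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
# -> rows (4, 5) and (8, 9), matching the drop_duplicates result above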

Why is the DataFrame not being sorted? [duplicate]

I'm new to pandas and to working with tabular data in a programming environment. I have sorted a dataframe by a specific column, but the answer that pandas spits out is not exactly correct.
Here is the code I have used:
league_dataframe.sort_values('overall_league_position')
In the result that the sort method yields, the values in the column 'overall_league_position' are not sorted in ascending order, which is the default for the method.
What am I doing wrong? Thanks for your patience!
For whatever reason, you seem to be working with a column of strings, so sort_values is returning a lexicographically sorted result.
Here's an example.
df = pd.DataFrame({"Col": ['1', '2', '3', '10', '20', '19']})
df
Col
0 1
1 2
2 3
3 10
4 20
5 19
df.sort_values('Col')
Col
0 1
3 10
5 19
1 2
4 20
2 3
The remedy is to convert it to numeric, either using .astype or pd.to_numeric.
df.Col = df.Col.astype(float)
Or,
df.Col = pd.to_numeric(df.Col, errors='coerce')
df.sort_values('Col')
Col
0 1
1 2
2 3
3 10
5 19
4 20
The only difference between astype and pd.to_numeric is that the latter is more robust at handling non-numeric strings (they're coerced to NaN), and will attempt to preserve integers if a coercion to float is not necessary (as is seen in this case).
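One more thing worth checking, since it produces exactly this symptom (a side note, not part of the original answer): sort_values returns a new, sorted DataFrame rather than sorting in place, so the result has to be assigned back (or inplace=True passed):

league_dataframe = league_dataframe.sort_values('overall_league_position')
# or, equivalently:
league_dataframe.sort_values('overall_league_position', inplace=True)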

Combining the respective columns from 2 separate DataFrames using pandas

I have 2 large DataFrames with the same set of columns but different values. I need to combine the values in the respective columns (A and B here; there may be more in the actual data) into single values in the same columns (see the required output below). I have a quick way of implementing this using np.vectorize and df.to_numpy(), but I am looking for a way to implement this strictly with pandas. The criteria here are first readability of the code, then time complexity.
df1 = pd.DataFrame({'A':[1,2,3,4,5], 'B':[5,4,3,2,1]})
print(df1)
A B
0 1 5
1 2 4
2 3 3
3 4 2
4 5 1
and,
df2 = pd.DataFrame({'A':[10,20,30,40,50], 'B':[50,40,30,20,10]})
print(df2)
A B
0 10 50
1 20 40
2 30 30
3 40 20
4 50 10
I have one way of doing it which is quite fast -
#This function might change into something more complex
def conc(a, b):
    return str(a) + '_' + str(b)
conc_v = np.vectorize(conc)
required = pd.DataFrame(conc_v(df1.to_numpy(), df2.to_numpy()), columns=df1.columns)
print(required)
#Required Output
A B
0 1_10 5_50
1 2_20 4_40
2 3_30 3_30
3 4_40 2_20
4 5_50 1_10
Looking for an alternate way (strictly pandas) of solving this.
The criteria here are first readability of the code
Another simple way is using add and radd
df1.astype(str).add(df2.astype(str).radd('-'))
A B
0 1-10 5-50
1 2-20 4-40
2 3-30 3-30
3 4-40 2-20
4 5-50 1-10
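The sketch above uses '-' as the separator; to reproduce the '_' from the required output, the same pattern works with radd('_'):

required = df1.astype(str).add(df2.astype(str).radd('_'))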

How to select multiple rows at a time in pandas?

I have a DataFrame object with an unknown number of rows, and I want to select 5 rows at a time.
For instance, if df has 11 rows, it will be selected 3 times (5 + 5 + 1), and if it has 4 rows, it will be selected only once.
How can I write the code using pandas?
Use groupby with a little arithmetic. This should be clean.
chunks = [g for _, g in df.groupby(df.index // 5)]
Depending on how you want your output structured, you may change g to g.values.tolist() (if you want a list instead).
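Note that df.index // 5 assumes a default RangeIndex; a quick way to sanity-check the chunk sizes (a small usage sketch, not part of the original answer):

chunks = [g for _, g in df.groupby(df.index // 5)]
print([len(c) for c in chunks])  # e.g. [5, 5, 1] for an 11-row frame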
numpy.split
np.split(df, np.arange(5, len(df), 5))
Demo
df = pd.DataFrame(dict(A=range(11)))
print(*np.split(df, np.arange(5, len(df), 5)), sep='\n\n')
A
0 0
1 1
2 2
3 3
4 4
A
5 5
6 6
7 7
8 8
9 9
A
10 10
Create a loop and slice the DataFrame with the index:
for i in range(0, len(df), 5):
    data = df.iloc[i:i + 5]

Apache Spark group sums by field

I have a dataframe with three columns
amount type id
12 A 1
10 C 1
21 B 2
10 A 2
2 B 3
44 B 3
I need to sum the amounts for each type and group them by id. My solution looks like
GroupedData result = dataFrame.agg(
        when(dataFrame.col("type").like("A%")
                .or(dataFrame.col("type").like("C%")),
             sum("amount"))
            .otherwise(0)
    ).agg(
        when(dataFrame.col("type").like("B%"), sum("amount"))
            .otherwise(0)
    )
    .groupBy(dataFrame.col("id"));
which doesn't look right to me. And I need to return a DataFrame as the result, with data
amount type id
22 A or C 1
21 B 2
10 A 2
46 B 3
I cannot use a double groupBy because two different types may end up in the same sum. What can you suggest?
I use Java and Apache Spark 1.6.2.
Why don't you groupBy by two columns?
df.groupBy($"id", $"type").sum()
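If A and C really do need to end up in the same sum, another option (a hedged PySpark sketch, since the original question uses the Java API; the column name type_group is illustrative) is to derive a grouping column first and then aggregate once:

from pyspark.sql import functions as F

result = (df
          .withColumn("type_group",
                      F.when(F.col("type").like("B%"), "B").otherwise("A or C"))
          .groupBy("id", "type_group")
          .agg(F.sum("amount").alias("amount")))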
