Multiple nested if conditions inside lambda - python-3.x

I have a column called zipcode in a pandas data frame. Some of the rows contain NaN values, some contain the correct string format like '160 00', and the rest contain the wrong format like '18000'. What I want is to skip the NaN values (not drop them) and convert wrong zipcodes into correct ones; for example: '18000' -> '180 00'.
Is it possible to do that by applying a lambda? All I've got so far is this:
df['zipcode'].apply(lambda row: print(row[:3] + ' ' + row[3:]) if type(row) == str else row)
Sample of dataframe:
df = pd.DataFrame(np.array(['11100', '246 00', '356 50',
                            np.nan, '18000', '156 00', '163 00']),
                  columns=['zipcode'])
zipcode
0 11100
1 246 00
2 356 50
3 nan
4 18000
5 156 00
6 163 00
Thank you.

Let us try .str.replace:
df['zipcode'] = df['zipcode'].str.replace(r'(\d{3})\s*(\d+)', r'\g<1> \g<2>', regex=True)
zipcode
0 111 00
1 246 00
2 356 50
3 nan
4 180 00
5 156 00
6 163 00
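Since the question asked about a lambda: an apply-based version also works. A minimal sketch, assuming every non-NaN value is a string and already-correct values contain a space (NaN is a float, so the isinstance check skips it):
df['zipcode'] = df['zipcode'].apply(
    lambda v: v[:3] + ' ' + v[3:] if isinstance(v, str) and ' ' not in v else v
)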


How to check if a value in a column is found in a list in a column, with Spark SQL?

I have a delta table A as shown below.
point   cluster   points_in_cluster
37      1         [37,32]
45      2         [45,67,84]
67      2         [45,67,84]
84      2         [45,67,84]
32      1         [37,32]
Also I have a table B as shown below.
id    point
101   37
102   67
103   84
I want a query like the following. Here "in" obviously doesn't work for an array column. So, what would be the right syntax?
select b.id, a.point
from A a, B b
where b.point in a.points_in_cluster
As a result, I should have a table like the following:
id    point
101   37
101   32
102   45
102   67
102   84
103   45
103   67
103   84
Based on your data sample, I'd do an equi-join on the point column and then an explode on points_in_cluster:
from pyspark.sql import functions as F

# assuming A is df_A and B is df_B
df_A.join(
    df_B,
    on="point"
).select(
    "id",
    F.explode("points_in_cluster").alias("point")
)
Otherwise, you use array_contains:
select b.id, a.point
from A a, B b
where array_contains(a.points_in_cluster, b.point)
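The same condition can also be expressed through the DataFrame API by joining on an array_contains column expression; a sketch for Spark 3.x, again assuming the tables are loaded as df_A and df_B (qualifying "point" through df_B avoids ambiguity, since both frames have that column):

from pyspark.sql import functions as F

df_B.join(
    df_A,
    on=F.array_contains(df_A["points_in_cluster"], df_B["point"])
).select("id", F.explode("points_in_cluster").alias("point"))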

Appending DataFrame to empty DataFrame in {Key: Empty DataFrame (with columns)}

I am struggling to understand this one.
I have a regular df (same columns as the empty df in the dict) and an empty df which is a value in a dictionary (the keys in the dict are variable based on certain inputs, so there can be just one key/value pair or multiple key/value pairs; I think this might be relevant). The dict structure is essentially:
{key: [[Empty DataFrame
Columns: [list of columns]
Index: []]]}
I am using the following code to try and add the data:
dict[key].append(df, ignore_index=True)
The error I get is:
temp_dict[product_match].append(regular_df, ignore_index=True)
TypeError: append() takes no keyword arguments
Is this error due to me mis-specifying the value I am attempting to append the df to (like am I trying to append the df to the key instead here) or something else?
Your dictionary contains a list of lists at the key; we can see this in the shown output:
{key: [[Empty DataFrame Columns: [list of columns] Index: []]]}
# ^^ list starts ^^ list ends
For this reason dict[key].append is calling list.append, as mentioned by @nandoquintana.
To append to the DataFrame access the specific element in the list:
temp_dict[product_match][0][0].append(df, ignore_index=True)
Notice there is no inplace version of append. append always produces a new DataFrame:
Sample Program:
import numpy as np
import pandas as pd

temp_dict = {
    'key': [[pd.DataFrame()]]
}
product_match = 'key'

np.random.seed(5)
df = pd.DataFrame(np.random.randint(0, 100, (5, 4)))

temp_dict[product_match][0][0].append(df, ignore_index=True)
print(temp_dict)
Output (temp_dict was not updated):
{'key': [[Empty DataFrame
Columns: []
Index: []]]}
The new DataFrame will need to be assigned to the correct location.
Either a new variable:
some_new_variable = temp_dict[product_match][0][0].append(df, ignore_index=True)
some_new_variable
0 1 2 3
0 99 78 61 16
1 73 8 62 27
2 30 80 7 76
3 15 53 80 27
4 44 77 75 65
Or back to the list:
temp_dict[product_match][0][0] = (
    temp_dict[product_match][0][0].append(df, ignore_index=True)
)
temp_dict
{'key': [[ 0 1 2 3
0 99 78 61 16
1 73 8 62 27
2 30 80 7 76
3 15 53 80 27
4 44 77 75 65]]}
Assuming the DataFrame at the key is actually an empty DataFrame, append is unnecessary: simply setting the value at the key to the new DataFrame works:
temp_dict[product_match] = df
temp_dict
{'key': 0 1 2 3
0 99 78 61 16
1 73 8 62 27
2 30 80 7 76
3 15 53 80 27
4 44 77 75 65}
Or, if the list-of-lists structure is needed:
temp_dict[product_match] = [[df]]
temp_dict
{'key': [[ 0 1 2 3
0 99 78 61 16
1 73 8 62 27
2 30 80 7 76
3 15 53 80 27
4 44 77 75 65]]}
Maybe you have an empty list at dict[key]?
Remember that the list "append" method (unlike the Pandas DataFrame one) only receives one parameter:
https://docs.python.org/3/tutorial/datastructures.html#more-on-lists
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html
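Note also that DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0. On current versions the same result comes from pd.concat; a sketch reusing the names from the sample program above:

import pandas as pd

# pandas >= 2.0: DataFrame.append is gone, so concatenate and assign back
temp_dict[product_match][0][0] = pd.concat(
    [temp_dict[product_match][0][0], df], ignore_index=True
)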

Python: SettingWithCopyWarning when trying to set value to True based on condition

Data:
Date Stock Peak Trough Price
2002-01-01 33.78 False False 25
2002-01-02 34.19 False False 35
2002-01-03 35.44 False False 33
2002-01-04 36.75 False False 38
I use this line of code to set 'Peak' to true in each row whenever the price of a stock is higher or equal to the max value in the row starting from column 4:
df['Peak'] = np.where(df.iloc[:,4:].max(axis=1) >= df[stock], 'False', 'True')
However, I'm trying to make it so that the first X and last Y rows are not affected. Let's say X and Y are both 10 in this example. I modified it like this:
df.iloc[10:-10]['Peak'] = np.where(df.iloc[10:-10,4:].max(axis=1) >= df.iloc[10:-10][stock], 'False', 'True')
This gives me an error SettingWithCopyWarning and also doesn't work anymore. Does anyone have an idea how to get the desired result so that the first X and last Y rows are always False?
I believe you need get_loc to specify the column index when assigning with df.iloc[]:
df.iloc[10:-10, df.columns.get_loc('Peak')] = (
    np.where(df.iloc[10:-10, 4:].max(axis=1) >= df.iloc[10:-10, df.columns.get_loc(stock)],
             'False', 'True')
)
To demonstrate, here is a test case:
import numpy as np
import pandas as pd

np.random.seed(123)
df = pd.DataFrame(np.random.randint(0, 100, (5, 4)), columns=list('ABCD'))
print(df)
A B C D
0 66 92 98 17
1 83 57 86 97
2 96 47 73 32
3 46 96 25 83
4 78 36 96 80
Trying to set column D to np.nan from index 2, we get the same error:
df.iloc[2:]['D'] = np.nan
A value is trying to be set on a copy of a slice from a DataFrame. Try
using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation:
http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
Trying the same while avoiding chained assignment, using get_loc (successful):
df.iloc[2:,df.columns.get_loc('D')] = np.nan
print(df)
A B C D
0 66 92 98 17.0
1 83 57 86 97.0
2 96 47 73 NaN
3 46 96 25 NaN
4 78 36 96 NaN
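If Peak should hold real booleans rather than the strings 'False'/'True', a variant is to initialize the whole column to False and assign only the middle slice, which keeps the first X and last Y rows False by construction. A sketch, assuming stock holds the relevant column's name and X = Y = 10:

df['Peak'] = False                # first X and last Y rows stay False
mid = df.index[10:-10]
# Peak is True where the stock price exceeds the row-wise max of columns 4+
df.loc[mid, 'Peak'] = df.loc[mid, stock] > df.loc[mid].iloc[:, 4:].max(axis=1)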

How to join a series into a dataframe

So I counted the frequency of the column 'address' in the dataframe 'df_two' and saved the data as a dict, then used that dict to create the series 'new_series'. Now I want to join this series to a dataframe, making 'df_three', so that I can do some maths with the column 'new_count' from 'new_series' and the column 'number' from 'df_two'.
I have tried merge / concat, but the items of 'new_count' were changed to NaN.
What I got (NaN):
df_three
number address name new_Count
14 12 ab pra NaN
49 03 cd ken NaN
97 07 ef dhi NaN
91 10 fg rav NaN
Input:
new_series
new_count
12 ab 8778
03 cd 6499
07 ef 5923
10 fg 5631
df_two
number address name
14 12 ab pra
49 03 cd ken
97 07 ef dhi
91 10 fg rav
Output:
df_three
number address name new_Count
14 12 ab pra 8778
49 03 cd ken 6499
97 07 ef dhi 5923
91 10 fg rav 5631
It seems you forgot the on parameter:
df = df_two.join(new_series, on='address')
print (df)
number address name new_count
0 14 12 ab pra 8778
1 49 03 cd ken 6499
2 97 07 ef dhi 5923
3 91 10 fg rav 5631
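An equivalent with merge; a sketch assuming new_series is indexed by address (the rename just makes sure the resulting column is called 'new_count'):

df_three = df_two.merge(new_series.rename('new_count'),
                        left_on='address', right_index=True)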

Remove index from dataframe using Python

I am trying to create a Pandas Dataframe from a string using the following code -
import pandas as pd
input_string="""A;B;C
0;34;88
2;45;200
3;47;65
4;32;140
"""
data = input_string
df = pd.DataFrame([x.split(';') for x in data.split('\n')])
print(df)
I am getting the following result -
0 1 2
0 A B C
1 0 34 88
2 2 45 200
3 3 47 65
4 4 32 140
5 None None
But I need something like the following -
A B C
0 34 88
2 45 200
3 47 65
4 32 140
I added "index = False" while creating the dataframe like -
df = pd.DataFrame([x.split(';') for x in data.split('\n')],index = False)
But, it gives me an error -
TypeError: Index(...) must be called with a collection of some kind, False
was passed
How is this achievable?
Use read_csv with StringIO and the index_col parameter to set the first column as the index:
input_string="""A;B;C
0;34;88
2;45;200
3;47;65
4;32;140
"""
from io import StringIO

df = pd.read_csv(StringIO(input_string), sep=';', index_col=0)
print (df)
B C
A
0 34 88
2 45 200
3 47 65
4 32 140
Your solution can be changed to split with the default parameter (arbitrary whitespace, which also drops the trailing empty line that produced the None row), pass all lists except the first to DataFrame with the columns parameter, and, if the first column should become the index, add DataFrame.set_index:
L = [x.split(';') for x in input_string.split()]
df = pd.DataFrame(L[1:], columns=L[0]).set_index('A')
print (df)
B C
A
0 34 88
2 45 200
3 47 65
4 32 140
For a general solution, use the first value of the first list in set_index:
L = [x.split(';') for x in input_string.split()]
df = pd.DataFrame(L[1:], columns=L[0]).set_index(L[0][0])
EDIT:
You can set the A value as the columns name instead of the index name:
df = df.rename_axis(df.index.name, axis=1).rename_axis(None)
print (df)
A B C
0 34 88
2 45 200
3 47 65
4 32 140
import pandas as pd
input_string="""A;B;C
0;34;88
2;45;200
3;47;65
4;32;140
"""
data = input_string
df = pd.DataFrame([x.split(';') for x in data.split()])
df.columns = df.iloc[0]
df = df.iloc[1:].rename_axis(None, axis=1)
df.set_index('A', inplace=True)
df
Output:
B C
A
0 34 88
2 45 200
3 47 65
4 32 140
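If the goal is only to print the frame without the row index (matching the desired output above exactly), DataFrame.to_string(index=False) does that directly; a sketch reusing the read_csv approach:

from io import StringIO

df = pd.read_csv(StringIO(input_string), sep=';')
print(df.to_string(index=False))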
