find columns that have only a specific value in pyspark - apache-spark

I have 600 columns, and some of them only ever contain the value 1 in some records. Each such column represents an alarm: the value 1 means the alarm fired, otherwise the value is null and there is no alarm. I need to find these columns in a Spark DataFrame. I tried the code below, but it does not produce the right output.
df.select([column for column in df.columns if df(column).isin(1)]).show()
(part of my dataset was attached as an image in the original post)

from pyspark.sql import functions as F

cols_to_filter = df.agg(*[F.max(F.col(c) == 1).alias(c) for c in df.columns]).collect()[0].asDict()
filtered_df = df[[c for c, b in cols_to_filter.items() if b]]
To explain:
# condition to check for in each column
cond = lambda c: F.col(c) == 1
# apply the condition to all columns; max returns True if the condition holds at least once.
# This is then collected into a python dict of the form {col name: True/False}
# so it can be used programmatically.
cols_to_filter = df.agg(*[F.max(cond(c)).alias(c) for c in df.columns]).collect()[0].asDict()
# now use that dict to keep only the columns where the condition was True
filtered_df = df[[c for c, b in cols_to_filter.items() if b]]
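For a quick sanity check, here is a minimal, self-contained sketch on a tiny made-up alarm DataFrame (the column names alarm_a/alarm_b and the SparkSession setup are assumptions for illustration, not part of the question):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# toy data: alarm_a fires (value 1) at least once, alarm_b is always null
df = spark.createDataFrame(
    [(1, None), (None, None), (1, None)],
    schema="alarm_a int, alarm_b int",
)

# max of a boolean column is True if the condition holds in at least one row;
# columns that are entirely null yield null, which falls through the "if b" check below
cols_to_filter = df.agg(*[F.max(F.col(c) == 1).alias(c) for c in df.columns]).collect()[0].asDict()
filtered_df = df[[c for c, b in cols_to_filter.items() if b]]
filtered_df.show()  # only alarm_a should remain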

Related

Increase the values in a column based on values in another column in pandas

I have my source data in the form of a csv file as below:
id,col1,col2
123,11|22|33||||||,val1|val3|val2
456,99||77|||88|||||||||6|,val4|val5|val6|val7
I need to add a new column (fnlsrc) whose value is based on the values in col1 and col2: if col1 has 9 pipe-separated values and col2 has 3 pipe-separated values, then fnlsrc should hold 9 pipe-separated values, i.e. 3 repetitions of col2 (val1|val3|val2|val1|val3|val2|val1|val3|val2). Please refer to the output below, which should make the requirement easier to understand:
id,col1,col2,fnlsrc
123,11|22|33||||||,val1|val3|val2,val1|val3|val2|val1|val3|val2|val1|val3|val2
456,99||77|||88|||||||||6|,val4|val5|val6|val7,val4|val5|val6|val7|val4|val5|val6|val7|val4|val5|val6|val7|val4|val5|val6|val7
I have tried the following code, but it's adding only one set:
zipped = zip(df['col1'], df['col2'])
for s, t in zipped:
    count = int((s.count('|') + 1) / (t.count('|') + 1))
    for val in range(count):
        df['fnlsrc'] = t
As the new column is based on the other two, I would use pandas' apply() function. I defined a function that calculates the new column's value from the other two columns, and applied it to each row:
def new_value(x):
    # Find out the number of values in both columns
    col1_numbers = x['col1'].count('|') + 1
    col2_numbers = x['col2'].count('|') + 1
    # Calculate how many times col2 should appear in the new column
    repetition = int(col1_numbers / col2_numbers)
    # Create a list of strings containing the values of the new column
    values = [x['col2']] * repetition
    # Join the list of strings with pipes
    return '|'.join(values)

# Apply the function to every row
df['fnlsrc'] = df.apply(lambda x: new_value(x), axis=1)
df
Output:
id col1 col2 fnlsrc
0 123 11|22|33|||||| val1|val3|val2 val1|val3|val2|val1|val3|val2|val1|val3|val2
1 456 99||77|||88|||||||||6| val4|val5|val6|val7 val4|val5|val6|val7|val4|val5|val6|val7|val4|v...
Full output in your input format:
id,col1,col2,fnlsrc
123,11|22|33||||||,val1|val3|val2,val1|val3|val2|val1|val3|val2|val1|val3|val2
456,99||77|||88|||||||||6|,val4|val5|val6|val7,val4|val5|val6|val7|val4|val5|val6|val7|val4|val5|val6|val7|val4|val5|val6|val7
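If you would rather avoid apply, the same result can be computed with a short list comprehension over the two columns. This is only a sketch of an alternative, not part of the answer above, and it assumes df is the same DataFrame:

# number of pipe-separated slots in col1 divided by the number in col2
reps = (df['col1'].str.count(r'\|') + 1) // (df['col2'].str.count(r'\|') + 1)
# repeat col2 the required number of times and re-join with pipes
df['fnlsrc'] = ['|'.join([c2] * int(r)) for c2, r in zip(df['col2'], reps)]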

pandas df: change values in column B only for rows that are unique in column A

It sounds like a trivial question and I'd expected to find a quick answer, but didn't have much success.
I have the dataframe population with columns A and B. I want to change the value in B to 1 only for those rows with a unique value in column A (currently all rows in B hold the value 0).
I tried:
small_towns = population['A'].value_counts() == 1
population[population['A'] in small_towns]['B']=1
and got: 'Series' objects are mutable, thus they cannot be hashed
I also tried:
population.loc[population['A'].value_counts() == 1, population['B']] = 1
and got the same error with an additional pandas.core.indexing.IndexingError.
Any ideas?
Thanks in advance,
Ben
We can use Series.duplicated with keep=False.
This returns a Series with True for all duplicated values and False for the rest. Negating it marks the unique rows, and we can set those rows to 1 using DataFrame.loc[]:
population.loc[~population['A'].duplicated(keep=False), 'B'] = 1
#population.loc[~population.duplicated(subset = 'A', keep=False), 'B'] = 1
We can also use Series.where or Series.mask
population['B'] = population['B'].where(population['A'].duplicated(keep=False), 1)
#population['B'] = population['B'].mask(~population['A'].duplicated(keep=False), 1)
but if you want to create a column B with 1 or 0 you can simply do:
population['B'] = (~population['A'].duplicated(keep=False)).astype(int)
or
population['B'] = np.where(population['A'].duplicated(keep=False), 0, 1)
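As a quick illustration, here is a minimal sketch with made-up data (the real population frame is not shown in the question):

import pandas as pd

population = pd.DataFrame({'A': ['x', 'y', 'y', 'z'], 'B': [0, 0, 0, 0]})
# 'x' and 'z' appear only once in A, so their B becomes 1; the duplicated 'y' rows stay 0
population.loc[~population['A'].duplicated(keep=False), 'B'] = 1
print(population)
#    A  B
# 0  x  1
# 1  y  0
# 2  y  0
# 3  z  1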

conditionally multiply values in DataFrame row

here is an example DataFrame:
df = pd.DataFrame([[1,0.5,-0.3],[0,-4,7],[1,0.12,-.06]], columns=['condition','value1','value2'])
I would like to apply a function which multiplies the values ('value1' and 'value2') in each row by 100 if the value in the 'condition' column of that row is equal to 1; otherwise, the row is left as is.
Presumably some usage of .apply with a lambda function would work here, but I am not able to get the syntax right, e.g.
df.apply(lambda x: 100*x if x['condition'] == 1, axis=1)
will not work
the desired output after applying this operation has value1 and value2 multiplied by 100 in the rows where condition equals 1 (it matches the table shown in the answers below).
As simple as
df.loc[df.condition==1,'value1':]*=100
import numpy as np
df['value1'] = np.where(df['condition']==1, df['value1']*100, df['value1'])
df['value2'] = np.where(df['condition']==1, df['value2']*100, df['value2'])
In case of multiple columns:
# create a list of the columns you want to apply the condition to
columns_list = ['value1', 'value2']
for i in columns_list:
    df[i] = np.where(df['condition']==1, df[i]*100, df[i])
Use df.loc[] with the condition, select the list of columns to operate on, and multiply:
l=['value1','value2'] #list of cols to operate on
df.loc[df.condition.eq(1),l]=df.mul(100)
#if condition is just 0 and 1 -> df.loc[df.condition.astype(bool),l]=df.mul(100)
print(df)
Another solution using df.mask() using same list of cols as above:
df[l]=df[l].mask(df.condition.eq(1),df[l]*100)
print(df)
condition value1 value2
0 1 50.0 -30.0
1 0 -4.0 7.0
2 1 12.0 -6.0
np.where works with a boolean mask: where the mask is True it picks the second argument, and where it is False it picks the third.
value_cols = ['value1','value2']
mask = (df.condition == 1)
# np.where broadcasts the column-shaped mask across the value columns
df[value_cols] = np.where(mask.to_numpy()[:, None], df[value_cols].mul(100), df[value_cols])
If you have multiple value columns such as value1, value2 and so on, use
value_cols = df.filter(regex=r'value\d').columns
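For reference, a minimal end-to-end check of the loc-based approach on the example frame from the question (a sketch; the numbers match the table shown above):

import pandas as pd

df = pd.DataFrame([[1, 0.5, -0.3], [0, -4, 7], [1, 0.12, -0.06]],
                  columns=['condition', 'value1', 'value2'])
value_cols = ['value1', 'value2']
# multiply only the value columns of rows whose condition equals 1
df.loc[df['condition'] == 1, value_cols] *= 100
print(df)
#    condition  value1  value2
# 0          1    50.0   -30.0
# 1          0    -4.0     7.0
# 2          1    12.0    -6.0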

Looking for NaN values in a specific column in df [duplicate]

Now I know how to check the dataframe for specific values across multiple columns. However, I can't seem to work out how to write an if statement based on the boolean result.
For example:
Walk directories using os.walk and read in a specific file into a dataframe.
import os
import fnmatch
import pandas as pd

for root, dirs, files in os.walk(main):
    filters = '*specificfile.csv'
    for filename in fnmatch.filter(files, filters):
        df = pd.read_csv(os.path.join(root, filename), error_bad_lines=False)
Now I am checking that dataframe across multiple columns. The first value is the column name (column1), the next value is the specific value I am looking for in that column (banana). I am then checking another column (column2) for a specific value (green). If both of these are true I want to carry out a specific task, and if not I want to do something else.
so something like:
if (df['column1']=='banana') & (df['colour']=='green'):
    do something
else:
    do something
If you want to check if any row of the DataFrame meets your conditions, you can use .any() along with your condition. Example -
if ((df['column1']=='banana') & (df['colour']=='green')).any():
Example -
In [16]: df
Out[16]:
A B
0 1 2
1 3 4
2 5 6
In [17]: ((df['A']==1) & (df['B'] == 2)).any()
Out[17]: True
This is because your condition - ((df['column1']=='banana') & (df['colour']=='green')) - returns a Series of True/False values.
This is because in pandas, when you compare a Series against a scalar value, each row of the Series is compared against the scalar, and the result is a Series of True/False values indicating the outcome for each row. Example -
In [19]: (df['A']==1)
Out[19]:
0 True
1 False
2 False
Name: A, dtype: bool
In [20]: (df['B'] == 2)
Out[20]:
0 True
1 False
2 False
Name: B, dtype: bool
And & does a row-wise AND of the two Series. Example -
In [18]: ((df['A']==1) & (df['B'] == 2))
Out[18]:
0 True
1 False
2 False
dtype: bool
Now, to check whether any of the values in this Series is True, you can use .any(); to check whether all of the values are True, you can use .all().
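Putting that together with the original question, a minimal sketch of the if/else (the two branches are placeholders, as in the question, and df is assumed to have the column1 and colour columns):

cond = (df['column1'] == 'banana') & (df['colour'] == 'green')
if cond.any():                          # True if at least one row matches both conditions
    print('at least one matching row')  # do something
else:
    print('no matching rows')           # do something else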

Check if Pandas Dataframe group has 2 specific values in a column and return those rows

I have a groupby object. For each of these groups, I need to check whether a particular column has rows that contain value-A and value-B, and return only those 2 rows within the group. If I use isin or "|" I would get cases where either one of these values is present. Right now I am doing a sloppy job of checking the first condition, then checking the second condition if the first one is true, and concatenating the results of both checks.
My code is as follows:
import pandas as pd
from datetime import datetime, timedelta
from statistics import mean

dict = {'col-a': ['T1A', 'T1A', 'T1A', 'T1B', 'T1B', 'T1C', 'T1C', 'P1', 'P1'],
        'col-b': ['07:57:00', '09:00:00', '12:00:00', '08:00:00', '08:25:00', '08:15:00', '07:25:00', '10:00:00', '07:45:00'],
        'col-c': ['11111', '22222', '99999', '33333', '22222', '22222', '99999', '22222', '99999'],
        'col-d': ['07:58:00', '09:01:00', '12:01:00', '08:01:00', '08:26:00', '08:16:00', '07:26:00', '10:01:00', '07:46:00'],
        }
original_df = pd.DataFrame(dict)
print("original df\n", original_df)
print("original df\n", original_df)
# condition 1: must contain T1 in col-a
# condition 2: must contain 22222(variable) amongst each group of col-a
# condition 3: record containing 22222 should have col-b value between 7 and 9
# condition 4: must contain 99999(stays the same) among amongst each group of col-a where above conditions are met
no_to_check = '22222'  # comes from another dataframe column
# filtering rows where col-a contains T1
filtered_df = original_df[original_df['col-a'].str.contains('T1')]
# grouping by col-a
trip_groups = filtered_df.groupby('col-a')
# checking if it contains '22222' in column c and '22222' has time between 7 and 9 in column b
trips_time_dict = {}
for group_key, group in trip_groups:
    check1 = group[(group['col-c'] == no_to_check) & (group['col-b'].between('07:00:00', '09:00:00'))]
    if len(check1) != 0:
        # checking if the group contains '99999' in column c
        check2 = group[group['col-c'] == '99999']
        if len(check2) != 0:
            all_conditions = pd.concat([check1, check2])
The desired output should contain one row for 22222 and one row for 99999 for each group that satisfies the criteria.
IIUC, you can do the following with df as your original dataframe:
df[df['col-a'].str.contains('T1')].groupby('col-a').apply(lambda x: x[(x['col-c']=='22222') & (x['col-b'].between('07:00:00', '09:00:00')) & (x['col-c']=='99999').any()])
Yields:
col-a col-b col-c col-d
col-a
T1A 1 T1A 09:00:00 22222 09:01:00
T1C 5 T1C 08:15:00 22222 08:16:00
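The one-liner above returns only the qualifying 22222 rows. If you also want the matching 99999 row from each qualifying group, as the question's desired output describes, one possible sketch (not part of the original answer; it reuses no_to_check and original_df from the question) is:

def pick_rows(g):
    # rows with the number of interest in the 07:00-09:00 window
    hit_no = (g['col-c'] == no_to_check) & (g['col-b'].between('07:00:00', '09:00:00'))
    # rows with the fixed value 99999
    hit_99999 = g['col-c'] == '99999'
    if hit_no.any() and hit_99999.any():
        return g[hit_no | hit_99999]
    return g.iloc[0:0]  # empty frame for groups that do not qualify

result = (original_df[original_df['col-a'].str.contains('T1')]
          .groupby('col-a', group_keys=False)
          .apply(pick_rows))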
