pandas: removing duplicate values in rows with same index in two columns - python-3.x

I have a dataframe as follows:
import numpy as np
import pandas as pd
df = pd.DataFrame({'text':['she is good', 'she is bad'], 'label':['she is good', 'she is good']})
I would like to compare the two columns row-wise and, wherever a row has the same value in both, replace the duplicate in the 'label' column with the word 'same'.
Desired output:
text label
0 she is good same
1 she is bad she is good
So far, I have tried the following, but it returns an error:
df['label'] = np.where(df.query("text == label"), df['label'] == ' ', df['label'] == df['label'])
ValueError: Length of values (1) does not match length of index (2)

Your syntax is not correct; have a look at the documentation of numpy.where. (The error arises because df.query("text == label") returns a filtered DataFrame with a single matching row, so the np.where result has only one row while the column you assign to has two.)
Check for equality between your two columns and replace the matching values in your label column:
import numpy as np
df['label'] = np.where(df['text'].eq(df['label']),'same',df['label'])
prints:
text label
0 she is good same
1 she is bad she is good
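For completeness, the same replacement can also be done without numpy by masking the label column directly; a minimal sketch using the same dataframe:
import pandas as pd
df = pd.DataFrame({'text': ['she is good', 'she is bad'],
                   'label': ['she is good', 'she is good']})
# Series.mask keeps values where the condition is False and
# writes 'same' where text and label are equal
df['label'] = df['label'].mask(df['text'].eq(df['label']), 'same')
print(df)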

Related

pandas split all list column and get first value

I am trying to get the first element of the list in every cell, across all rows and columns, into a single dataframe. Every cell holds a list-like value with 2 elements. Here is what I tried. What syntax should I use to apply a function to the entire dataframe in pandas?
import pandas as pd
import numpy as np
def my_function(x):
    return x.replace('\[','').replace('\]','').split(',')[0]
t = pd.DataFrame(data={'col1': ['[blah,blah]','[test,bing]',np.NaN], 'col2': ['[math,sci]',np.NaN,['number','4']]})
print(t)
not working:
t.apply(my_function) # AttributeError: 'Series' object has no attribute 'split'
t.apply(lambda x: str(x).replace('\[','').replace('\]','').split(',')[0]) # does not work
t.apply(lambda x: list(x)[0]) # gives first column and doesn't split
trying to get this:
col1 col2
blah math
test NaN
NaN number
Use replace:
>>> t.replace(r'\[([^,]*).*', r'\1', regex=True)
col1 col2
0 blah math
1 test NaN
2 NaN number
But I think there is an error in how you create your sample dataframe. I changed it to:
t = pd.DataFrame(data={'col1': ['[blah,blah]','[test,bing]',np.NaN],
'col2': ['[math,sci]',np.NaN,'[number,4]']})
# The problem is here ------------------------------^^^^^^^^^^^^
Link to regex101
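If you prefer an explicit per-column approach, the same result can be obtained with the pandas str accessor; a sketch assuming the corrected sample frame:
import numpy as np
import pandas as pd
t = pd.DataFrame(data={'col1': ['[blah,blah]','[test,bing]',np.nan],
                       'col2': ['[math,sci]',np.nan,'[number,4]']})
# Strip the brackets, split on the comma, and keep the first piece, column by column
out = t.apply(lambda col: col.str.strip('[]').str.split(',').str[0])
print(out)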

Apply function to specific rows of dataframe column

I have a following column in a dataframe:
COLUMN_NAME
1
0
1
1
65280
65376
65280
I want to convert the 5-digit values in the column to their corresponding binary values. I know how to convert a value with the bin() function, but I don't know how to apply it only to the rows that have 5 digits.
Note that the column contains only values with either 1 or 5 digits. Values with 1 digit are only 1 or 0.
import pandas as pd
import numpy as np
data = {'c': [1,0,1,1,65280,65376,65280] }
df = pd.DataFrame(data, columns=['c'])
# create another column 'clen' which has the length of 'c'
df['clen'] = df['c'].astype(str).map(len)
# check the condition and apply the bin function to the matching rows
df.loc[df['clen']==5,'c'] = df['c'].apply(bin)
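A slightly tighter variant does without the helper column and applies bin only to the rows that need it; a sketch with the same sample data (the cast to object is just so integers and binary strings can coexist in one column):
import pandas as pd
data = {'c': [1, 0, 1, 1, 65280, 65376, 65280]}
df = pd.DataFrame(data, columns=['c'])
df['c'] = df['c'].astype(object)  # allow mixed int/str values in the column
# boolean mask selecting the 5-digit values, then convert only those rows
mask = df['c'].astype(str).str.len() == 5
df.loc[mask, 'c'] = df.loc[mask, 'c'].apply(bin)
print(df)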

P-value normal test for multiple rows

I got the following simple code to calculate normality over an array:
import pandas as pd
df = pd.read_excel("directory\file.xlsx")
import numpy as np
x=df.iloc[:,1:].values.flatten()
import scipy.stats as stats
from scipy.stats import normaltest
stats.normaltest(x,axis=None)
This gives me nicely a p-value and a statistic.
The only thing I want right now is to:
Add 2 columns in the file with this p-value and statistic, and if I have multiple rows, do it for all the rows (calculate the p-value & statistic for each row and add 2 columns with these values in them).
Can someone help?
If you want to calculate the normaltest row-wise, you should not flatten your data into x; use axis=1 instead, such as:
df = pd.DataFrame(np.random.random(105).reshape(5,21)) # to generate sample data
# calculate normaltest row-wise, skipping the first column as in your code
df['stat'], df['p'] = stats.normaltest(df.iloc[:,1:], axis=1)
Then df contains two columns, 'stat' and 'p', with the values you are looking for, IIUC.
Note: to be able to perform normaltest you need at least 8 values (in my experience), so you need at least 8 columns in df.iloc[:,1:]; otherwise it will raise an error. Even then, it is better to have more than 20 values in each row.
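A self-contained sketch of this approach, using random data as a stand-in for the Excel file and unpacking the result through the named fields that scipy returns:
import numpy as np
import pandas as pd
from scipy import stats
df = pd.DataFrame(np.random.random(105).reshape(5, 21))  # stand-in for the spreadsheet
# normaltest returns a result object with .statistic and .pvalue arrays, one entry per row
res = stats.normaltest(df.iloc[:, 1:], axis=1)
df['stat'] = res.statistic
df['p'] = res.pvalue
print(df[['stat', 'p']])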

Generating DataFrame from Multiplication Table Printout - Python 3.x

In the following code, I am producing printouts of randomly generated multiplication tables. I would like to turn each generated table into a DataFrame. How would I do this? (I am new to Python 3.x.)
This exercise started as generating a single multiplication table and expanded into a project to generate a set number of multiplication tables with randomly generated one- or two-digit column and row numbers. Currently it is set to run five tables, each with 8 columns and rows, but these numbers can be changed. Jupyter Notebook can only print up to 12 columns nicely, so while the program will generate as many columns and rows as we want (of equal size, e.g. 6x6, 3x3, 9x9, etc.), limiting it to a 12x12 matrix or smaller is best for viewing.
import pandas as pd
import numpy as np
%matplotlib notebook
# This sets up how many tables we will generate
for t in range(0,5):
    # Make variable place holders for our columns and rows list
    a=[]
    b=[]
    # To use randomly generated numbers, this sets up the random column numbers 'a' and random row numbers 'b'
    import random
    for x in range(12):
        a.append(random.randint(41,99)) # We can adjust the range of the random selection of numbers here
        b.append(random.randint(1,35)) # We can adjust the range of the random selection of numbers here
    # Add the column titles for each table - these are the random numbers 'a'
    print("C/R: ", end="\t ")
    for number in a:
        print(number,end = '\t ')
    print()
    # The double for-loop to generate the table
    for row in b:
        print(row, end="\t") # First column
        for number in a:
            print(round(row*number,1),end='\t' ) # Next columns
        print( )
    # Add two blank cosmetic lines between tables for readability
    print('\n\n')
Thanks.
Based upon Josewails' assistance, this is the code for which I was looking. Josewails, thank you.
import pandas as pd
import numpy as np
%matplotlib notebook
# This sets up how many tables we will generate
for t in range(0,5):
    # Make variable place holders for our columns and rows list
    a = []
    b = []
    dataframes = []
    # To use randomly generated numbers, this sets up the random column numbers 'a' and random row numbers 'b'
    import random
    for x in range(12):
        a.append(random.randint(41,99)) # We can adjust the range of the random selection of numbers here
        b.append(random.randint(1,35)) # We can adjust the range of the random selection of numbers here
    data = []
    for row in b:
        temp = []
        for number in a:
            temp.append(round(row*number,1))
        data.append(temp)
    dataframe = pd.DataFrame(data=data, columns=a)
    dataframe.index = b
    dataframes.append(dataframe)
display(dataframes[0])
Refactoring your code a bit should give the desired output.
import pandas as pd
import numpy as np
%matplotlib notebook
# This sets up how many tables we will generate
dataframes = []
for t in range(0,5):
    # Make variable place holders for our columns and rows list
    a = []
    b = []
    # To use randomly generated numbers, this sets up the random column numbers 'a' and random row numbers 'b'
    import random
    for x in range(12):
        a.append(random.randint(41,99)) # We can adjust the range of the random selection of numbers here
        b.append(random.randint(1,35)) # We can adjust the range of the random selection of numbers here
    data = []
    for row in b:
        temp = []
        for number in a:
            temp.append(round(row*number,1))
        data.append(temp)
    dataframe = pd.DataFrame(data=data, columns=a)
    dataframe.index = b
    dataframes.append(dataframe)
dataframes[0]
This is the output.
[screenshot: the first generated multiplication table rendered as a pandas DataFrame]
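As an aside, the nested loops that build data can also be replaced by an outer product, which is a common way to express a multiplication table; a sketch under the same column/row setup:
import random
import numpy as np
import pandas as pd
a = [random.randint(41, 99) for _ in range(12)]  # column headers
b = [random.randint(1, 35) for _ in range(12)]   # row labels
# np.outer(b, a)[i, j] == b[i] * a[j], i.e. the whole table in one call
table = pd.DataFrame(np.outer(b, a), index=b, columns=a)
print(table)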

Re-indexing a pandas Series extracted from DataFrame

Problem
Let us say that I have a DataFrame df of n rows like this:
| Precipitation | Discharge |
|:-------------:|:---------:|
|      12       |    16     |
|      10       |    15     |
|      ...      |    ...    |
|      12       |    16     |
|      10       |    15     |
Rows are automatically indexed from 1 to n. When we extract one column, for example:
series = df.loc[5:,['Precipitation']]
or
series = df.Precipitation[5:]
The extracted series tends to be like:
Precipitation
5 16
6 17
7 18
...
n 15
So the question is: how can we change the resulting index from 5..n back to 0..n-5?
Note that I have tried series.reindex() and series.reset_index(), but neither of them works...
Currently I use series.tolist() to work around the problem, but is there a more elegant and smarter way?
Many thanks in advance!
If I understand your question correctly, you want to select the first n-5 rows of your column.
import pandas as pd
df = pd.DataFrame({'col1': [1,2,3,4,5,6,7], 'col2': [3,4,3,2,5,9,7]})
df.loc[:len(df)-5,['col1']]
You may want to read up on slicing with pandas: https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges
EDIT:
I think I misunderstood, and what you instead want is to 'reset' your index. You can do that with the reset_index() method:
import pandas as pd
df = pd.DataFrame({'col1': [1,2,3,4,5,6,7], 'col2': [3,4,3,2,5,9,7]})
df = df.loc[5:,['col1']]
df.reset_index(drop=True)
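One thing to keep in mind: reset_index returns a new object by default, so assign the result back (or pass inplace=True) if you want the change to stick. A minimal sketch:
import pandas as pd
df = pd.DataFrame({'col1': [1, 2, 3, 4, 5, 6, 7], 'col2': [3, 4, 3, 2, 5, 9, 7]})
series = df.loc[5:, 'col1']             # index is 5, 6
series = series.reset_index(drop=True)  # index becomes 0, 1
print(series)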
