CHR  SNP         BP         A1  A2  OR       P
8    rs62513865  101592213  T   C   1.00652  0.8086
8    rs79643588  106973048  A   T   1.01786  0.4606
I have this example table, and I want to filter rows by comparing column A1 with A2.
If the pair (A1, A2) matches any of these four conditions, delete the row:

A1  A2
A   T
T   A
C   G
G   C

(e.g. row 2 in the first table, where A1='A' and A2='T').
How can I do that using Python pandas?
Here is one way to do it:
Combine the two columns in each of the two DataFrames, convert the combinations of the second DataFrame to a list, and keep only the rows of the first whose combination is not found in that list:
df[~(df['A1'] + df['A2']).str.strip()
   .isin((df2['A1'] + df2['A2']).tolist())]
CHR SNP BP A1 A2 OR P
0 8 rs62513865 101592213 T C 1.00652 0.8086
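For completeness, here is a minimal runnable sketch with both tables from the question assembled by hand (df is the SNP table, df2 the allele-pair conditions):
import pandas as pd

# first table from the question
df = pd.DataFrame({
    'CHR': [8, 8],
    'SNP': ['rs62513865', 'rs79643588'],
    'BP': [101592213, 106973048],
    'A1': ['T', 'A'],
    'A2': ['C', 'T'],
    'OR': [1.00652, 1.01786],
    'P': [0.8086, 0.4606],
})
# condition table: allele pairs whose rows should be dropped
df2 = pd.DataFrame({'A1': ['A', 'T', 'C', 'G'],
                    'A2': ['T', 'A', 'G', 'C']})

# keep rows whose A1+A2 combination is not listed in df2
out = df[~(df['A1'] + df['A2']).isin((df2['A1'] + df2['A2']).tolist())]
print(out)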
keeping
Assuming df1 and df2, you can simply merge to keep the common values:
out = df1.merge(df2)
output:
CHR SNP BP A1 A2 OR P
0 8 rs79643588 106973048 A T 1.01786 0.4606
dropping
For removing the rows, perform a negative merge:
out = (df1.merge(df2, how='outer', indicator=True)
.loc[lambda d: d.pop('_merge').eq('left_only')]
)
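The indicator=True flag adds a _merge column marking each row as 'left_only', 'right_only', or 'both'; popping it filters on that flag and removes the helper column in one step.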
Or merge and get the remaining indices to drop (requires unique indices):
out = df1.drop(df1.reset_index().merge(df2)['index'])
output:
CHR SNP BP A1 A2 OR P
0 8.0 rs62513865 101592213.0 T C 1.00652 0.8086
alternative approach
As it seems you have nucleotides and want to drop the rows that form an A/T or C/G pair, you could translate A to T and C to G in A1 and keep only the rows where the result is not identical to the value of A2:
m = df1['A1'].map({'A': 'T', 'C': 'G'}).fillna(df1['A1']).ne(df1['A2'])
out = df1[m]
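For the first table, rs79643588 has A1='A', which maps to 'T' and equals its A2, so that row is dropped; rs62513865 has A1='T', which is left unchanged (restored by fillna) and differs from its A2='C', so it is kept.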
I am trying to find out whether points lie inside closed polygons (see this question: Finding if a point in a dataframe is in a polygon and assigning polygon name to point), but I realized that there might be another way to do this:
I have this dataframe
df=
id x_zone y_zone
0 A1 65.422080 48.147850
1 A1 46.635708 51.165745
2 A1 46.597984 47.657444
3 A1 68.477700 44.073700
4 A3 46.635708 54.108190
5 A3 46.635708 51.844770
6 A3 63.309560 48.826878
7 A3 62.215572 54.108190
and I would like to transform this into
id Polygon
0   A1  POLYGON((65.422080, 48.147850), (46.635708, 51.165745), (46.597984, 47.657444), (68.477700, 44.073700))
1   A3  POLYGON((46.635708, 54.108190), (46.635708, 51.844770), (63.309560, 48.826878), (62.215572, 54.108190))
and do the same for points:
df1=
item x y
0 1 50 49
1 2 60 53
2 3 70 30
to
item point
0 1 POINT(50,49)
1 2 POINT(60,53)
2 3 POINT(70,30)
I have never used geopandas and am a little at a loss here.
My question is thus: How do I get from a pandas dataframe to a dataframe with geopandas attributes?
Thankful for any insight!
You can achieve this as follows, but you would have to set the right dtype. I know that in ArcGIS you have to set the dtype as geometry:
df.groupby('id').apply(lambda x: 'POLYGON(' + str(tuple(zip(x['x_zone'], x['y_zone']))) + ')')
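Note that this produces plain strings rather than geometry objects; for an actual geometry dtype, see the geopandas approach below.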
I'd suggest the following to directly get a GeoDataFrame from your df:
from shapely.geometry import Polygon
import geopandas as gpd
gdf = gpd.GeoDataFrame(geometry=df.groupby('id').apply(
    lambda g: Polygon(gpd.points_from_xy(g['x_zone'], g['y_zone']))))
It first creates a sequence of points using geopandas' points_from_xy, then creates a Polygon object from that sequence.
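The question also asks about the points table; a minimal sketch along the same lines, assuming the column names item, x and y from the question:
gdf1 = gpd.GeoDataFrame(df1[['item']],
                        geometry=gpd.points_from_xy(df1['x'], df1['y']))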
I have a Pandas dataframe with two columns, "id" (a unique identifier) and "date", that looks as follows:
test_df.head()
id date
0 N1 2020-01-31
1 N2 2020-02-28
2 N3 2020-03-10
I have created a custom Python function that, given two date strings, will compute the absolute number of days between those dates (with a given date format string e.g. %Y-%m-%d), as follows:
from datetime import datetime

def days_distance(date_1, date_1_format, date_2, date_2_format):
    """Calculate the number of days between two given string dates.

    Args:
        date_1 (str): First date
        date_1_format (str): The format of the first date
        date_2 (str): Second date
        date_2_format (str): The format of the second date

    Returns:
        The absolute number of days between date_1 and date_2
    """
    date1 = datetime.strptime(date_1, date_1_format)
    date2 = datetime.strptime(date_2, date_2_format)
    return abs((date2 - date1).days)
I would like to create a distance matrix that, for every pair of IDs, contains the number of days between the dates associated with those IDs. Using the test_df example above, the final time distance matrix should look as follows:
N1 N2 N3
N1 0 28 39
N2 28 0 11
N3 39 11 0
I am struggling to find a way to compute a distance matrix using a bespoke distance function, such as my days_distance() function above, as opposed to a standard distance measure provided for example by SciPy.
Any suggestions?
Let us try pdist + squareform to create a square distance matrix representing the pairwise differences between the datetime objects, and finally build a new dataframe from this square matrix:
from scipy.spatial.distance import pdist, squareform

i, d = test_df['id'].values, pd.to_datetime(test_df['date']).to_numpy()
df = pd.DataFrame(squareform(pdist(d[:, None])),
                  dtype='timedelta64[ns]', index=i, columns=i)
Alternatively you can also calculate the distance matrix using numpy broadcasting:
i, d = test_df['id'].values, pd.to_datetime(test_df['date']).values
df = pd.DataFrame(np.abs(d[:, None] - d), index=i, columns=i)
N1 N2 N3
N1 0 days 28 days 39 days
N2 28 days 0 days 11 days
N3 39 days 11 days 0 days
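If you want plain integer days, as in the question's expected matrix, one possible follow-up (a sketch using the .dt accessor column-wise):
df_days = df.apply(lambda s: s.dt.days)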
You can convert the date column to datetime format, build a NumPy array from that column, tile the array into a matrix, subtract the matrix from its transpose, and convert the result back to a DataFrame:
import pandas as pd
import numpy as np
from datetime import datetime
test_df = pd.DataFrame({'ID': ['N1', 'N2', 'N3'],
                        'date': ['2020-01-31', '2020-02-28', '2020-03-10']})
test_df['date_datetime'] = test_df.date.apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
date_array = np.array(test_df.date_datetime)
date_matrix = np.tile(date_array, (3, 1))
date_diff_matrix = np.abs(date_matrix.T - date_matrix)
date_diff = pd.DataFrame(date_diff_matrix)
date_diff.columns = test_df.ID
date_diff.index = test_df.ID
ID       N1      N2      N3
ID
N1   0 days 28 days 39 days
N2  28 days  0 days 11 days
N3  39 days 11 days  0 days
I load some machine learning data from a CSV file. The first 2 columns are observations and the remaining columns are features.
Currently, I do the following:
data = pandas.read_csv('mydata.csv')
which gives something like:
data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))
I'd like to slice this dataframe in two dataframes: one containing the columns a and b and one containing the columns c, d and e.
It is not possible to write something like
observations = data[:'c']
features = data['c':]
I'm not sure what the best method is. Do I need a pd.Panel?
By the way, I find DataFrame indexing pretty inconsistent: data['a'] is permitted, but data[0] is not. On the other hand, data['a':] is not permitted but data[0:] is.
Is there a practical reason for this? It is really confusing when columns are indexed by int, given that data[0] != data[0:1].
2017 Answer - pandas 0.20: .ix is deprecated. Use .loc
See the deprecation in the docs
.loc uses label-based indexing to select both rows and columns. The labels are the values of the index or the columns. Slicing with .loc includes the last element.
Let's assume we have a DataFrame with the following columns:
foo, bar, quz, ant, cat, sat, dat.
# selects all rows and all columns beginning at 'foo' up to and including 'sat'
df.loc[:, 'foo':'sat']
# foo bar quz ant cat sat
.loc accepts the same slice notation that Python lists do, for both rows and columns. Slice notation is start:stop:step.
# slice from 'foo' to 'cat' by every 2nd column
df.loc[:, 'foo':'cat':2]
# foo quz cat
# slice from the beginning to 'bar'
df.loc[:, :'bar']
# foo bar
# slice from 'quz' to the end by 3
df.loc[:, 'quz'::3]
# quz sat
# attempt from 'sat' to 'bar'
df.loc[:, 'sat':'bar']
# no columns returned
# slice from 'sat' to 'bar'
df.loc[:, 'sat':'bar':-1]
# sat cat ant quz bar
# slice notation is syntactic sugar for the slice function
# slice from 'quz' to the end by 2 with slice function
df.loc[:, slice('quz',None, 2)]
# quz cat dat
# select specific columns with a list
# select columns foo, bar and dat
df.loc[:, ['foo','bar','dat']]
# foo bar dat
You can slice by rows and columns. For instance, if you have 5 rows with labels v, w, x, y, z
# slice from 'w' to 'y' and 'foo' to 'ant' by 3
df.loc['w':'y', 'foo':'ant':3]
# foo ant
# w
# x
# y
Note: .ix has been deprecated since Pandas v0.20. You should instead use .loc or .iloc, as appropriate.
The DataFrame.ix index is what you want to be accessing. It's a little confusing (I agree that Pandas indexing is perplexing at times!), but the following seems to do what you want:
>>> df = pd.DataFrame(np.random.rand(4,5), columns=list('abcde'))
>>> df.ix[:,'b':]
b c d e
0 0.418762 0.042369 0.869203 0.972314
1 0.991058 0.510228 0.594784 0.534366
2 0.407472 0.259811 0.396664 0.894202
3 0.726168 0.139531 0.324932 0.906575
where .ix[row slice, column slice] is what is being interpreted. More on Pandas indexing here: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-advanced
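In modern pandas, the equivalent of the .ix example above uses .loc (label-based):
>>> df.loc[:, 'b':]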
Let's use the Titanic dataset from the seaborn package as an example:
# Load dataset (pip install seaborn)
>> import seaborn as sns
>> titanic = sns.load_dataset('titanic')
using the column names
>> titanic.loc[:,['sex','age','fare']]
using the column indices
>> titanic.iloc[:,[2,3,6]]
using ix (only in pandas versions older than 0.20)
>> titanic.ix[:,['sex','age','fare']]
or
>> titanic.ix[:,[2,3,6]]
using the reindex method
>> titanic.reindex(columns=['sex','age','fare'])
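Note that reindex silently inserts all-NaN columns for labels that do not exist, whereas .loc with a list containing missing labels raises a KeyError in recent pandas versions.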
Also, given a DataFrame
data
as in your example, if you would like to extract columns a and d only (i.e. the 1st and the 4th columns), the iloc method of the pandas DataFrame is what you need and can be used very effectively. All you need to know are the indices of the columns you would like to extract. For example:
>>> data.iloc[:,[0,3]]
will give you
a d
0 0.883283 0.100975
1 0.614313 0.221731
2 0.438963 0.224361
3 0.466078 0.703347
4 0.955285 0.114033
5 0.268443 0.416996
6 0.613241 0.327548
7 0.370784 0.359159
8 0.692708 0.659410
9 0.806624 0.875476
You can slice along the columns of a DataFrame by referring to the names of each column in a list, like so:
data = pandas.DataFrame(np.random.rand(10,5), columns = list('abcde'))
data_ab = data[list('ab')]
data_cde = data[list('cde')]
And if you came here looking to slice two ranges of columns and combine them together (like me), you can do something like
op = df[list(df.columns[0:899]) + list(df.columns[3593:])]
print(op)
This will create a new DataFrame with the first 899 columns (indices 0-898) and all columns from index 3593 onward (assuming your data set has some 4000 columns).
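An equivalent sketch using np.r_ to splice the two positional ranges together (same assumed column counts as above):
import numpy as np
# np.r_ concatenates the two integer ranges into a single index array
op = df.iloc[:, np.r_[0:899, 3593:len(df.columns)]]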
Here's how you can use different methods to do selective column slicing, including label-based, index-based and range-based column slicing.
In [37]: import pandas as pd
In [38]: import numpy as np
In [43]: df = pd.DataFrame(np.random.rand(4,7), columns = list('abcdefg'))
In [44]: df
Out[44]:
a b c d e f g
0 0.409038 0.745497 0.890767 0.945890 0.014655 0.458070 0.786633
1 0.570642 0.181552 0.794599 0.036340 0.907011 0.655237 0.735268
2 0.568440 0.501638 0.186635 0.441445 0.703312 0.187447 0.604305
3 0.679125 0.642817 0.697628 0.391686 0.698381 0.936899 0.101806
In [45]: df.loc[:, ["a", "b", "c"]] ## label based selective column slicing
Out[45]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
In [46]: df.loc[:, "a":"c"] ## label based column ranges slicing
Out[46]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
In [47]: df.iloc[:, 0:3] ## index based column ranges slicing
Out[47]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
### with 2 different column ranges, index based slicing:
In [49]: df[df.columns[0:1].tolist() + df.columns[1:3].tolist()]
Out[49]:
a b c
0 0.409038 0.745497 0.890767
1 0.570642 0.181552 0.794599
2 0.568440 0.501638 0.186635
3 0.679125 0.642817 0.697628
Another way to get a subset of columns from your DataFrame, assuming you want all the rows, would be to do:
data[['a','b']] and data[['c','d','e']]
If you want to use numerical column indexes you can do:
data[data.columns[:2]] and data[data.columns[2:]]
These two are equivalent (assuming columns 3 and 7 are 'Relevance' and 'Title'):
>>> print(df2.loc[140:160, ['Relevance', 'Title']])
>>> print(df2.ix[140:160, [3, 7]])
If the DataFrame looks like this:
group name count
fruit apple 90
fruit banana 150
fruit orange 130
vegetable broccoli 80
vegetable kale 70
vegetable lettuce 125
and the desired output is:
group name count
0 fruit apple 90
1 fruit banana 150
2 fruit orange 130
you can use the logical operator np.logical_not:
df[np.logical_not(df['group'] == 'vegetable')]
More about NumPy's logic functions:
https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.logic.html
Other logical operators:
logical_and(x1, x2, /[, out, where, ...])  Compute the truth value of x1 AND x2 element-wise.
logical_or(x1, x2, /[, out, where, ...])   Compute the truth value of x1 OR x2 element-wise.
logical_not(x, /[, out, where, ...])       Compute the truth value of NOT x element-wise.
logical_xor(x1, x2, /[, out, where, ...])  Compute the truth value of x1 XOR x2 element-wise.
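For example, combining two conditions with np.logical_and on the same table (a sketch using the columns above):
df[np.logical_and(df['group'] == 'fruit', df['count'] > 100)]
# keeps banana (150) and orange (130)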
You can use the truncate method:
df = pd.DataFrame(np.random.rand(10, 5), columns = list('abcde'))
df_ab = df.truncate(before='a', after='b', axis=1)
df_cde = df.truncate(before='c', axis=1)
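Note that truncate keeps both boundary labels (before='a', after='b' keeps columns a and b inclusive), and it requires the axis labels to be sorted.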
I have a closed contour in the form of a polyline. I am accessing the points through vtkPolyData.GetLines() and iterating through the cells in the vtkCellArray.
I want to calculate the angle bisector at each vertex of the line. Therefore I need to know the coordinates of V_{i-1}, V_i and V_{i+1}.
In the vtkCellArray layout [n0, p_1, p_2, ..., p_n0, ...], if p_2 comes after p_1 in a cell, does it mean that p_1 and p_2 are connected?
Yes, it does. To test your case with vtkPolyLine, let's create a vtkPolyData with a single vtkPolyLine whose last point is the same as its first point. We will see that the resulting cell array preserves the same sequence (i.e. the last and first point ids are the same).
import vtk as v

# four points along the x-axis
pts = v.vtkPoints()
pts.InsertNextPoint(0, 0, 0)
pts.InsertNextPoint(1, 0, 0)
pts.InsertNextPoint(2, 0, 0)
pts.InsertNextPoint(3, 0, 0)

# a polyline with 5 ids: points 0-3, closed by repeating id 0
polyLine = v.vtkPolyLine()
polyLine.GetPointIds().SetNumberOfIds(5)
polyLine.GetPointIds().SetId(0, 0)
polyLine.GetPointIds().SetId(1, 1)
polyLine.GetPointIds().SetId(2, 2)
polyLine.GetPointIds().SetId(3, 3)
polyLine.GetPointIds().SetId(4, 0)

lines = v.vtkCellArray()
lines.InsertNextCell(polyLine)

pd = v.vtkPolyData()
pd.SetPoints(pts)
pd.SetLines(lines)

# write a legacy .vtk file so we can inspect the connectivity
wr = v.vtkPolyDataWriter()
wr.SetFileName('Lines.vtk')
wr.SetInputData(pd)
wr.Write()
The file Lines.vtk contains the following:
# vtk DataFile Version 4.2
vtk output
ASCII
DATASET POLYDATA
POINTS 4 float
0 0 0 1 0 0 2 0 0
3 0 0
LINES 1 6
5 0 1 2 3 0 # This line has 5 points and last and first point are the same (0)
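To recover the consecutive vertex pairs programmatically (e.g. for your angle-bisector computation), a small sketch iterating the cell's point ids:
ids = polyLine.GetPointIds()
for k in range(ids.GetNumberOfIds() - 1):
    a, b = ids.GetId(k), ids.GetId(k + 1)
    # consecutive ids within a cell are connected segments
    print('segment:', a, '->', b, pts.GetPoint(a), pts.GetPoint(b))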