I have a list of specific company identification numbers.
ex. companyID = ['1','2','3']
and I have a dataframe of different attributes relating to company business.
ex. company_df
There are multiple columns where values from my list could be.
ex. 'company_number', 'company_value', 'job_referred_by', etc.
How can I check if any value from my companyID list exists anywhere in my company_df, regardless of datatype, and return only the columns where a companyID is found?
This is what I have tried, with no luck:
def find_any(company_df, companyID):
    found = company_df.isin(companyID).any()
    foundCols = found.index[found].tolist()
    print(foundCols)
Create a df from your list of companyIDs and then merge the two dfs on company ID. Then filter the df to show only the rows that match.
For datatypes, you can convert int to string with no problem, but the other way around would crash if you have a string that can't be converted to int (e.g., 'a'), so I'd use strings.
Here's a toy example:
import pandas as pd

company_df = pd.DataFrame({'co_id': [1, 2, 4, 9]})
company_df['co_id'] = company_df['co_id'].astype(str)
companyID = ['1','2','3']
df_companyID = pd.DataFrame(companyID, columns=['co_id'])
company_df = company_df.merge(df_companyID, on='co_id', how='left', indicator=True)
print(company_df)
# co_id _merge
# 0 1 both
# 1 2 both
# 2 4 left_only
# 3 9 left_only
company_df_hits_only = company_df[company_df['_merge'] == 'both']
del company_df['_merge']
del company_df_hits_only['_merge']
print(company_df_hits_only)
# co_id
# 0 1
# 1 2
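Incidentally, the isin attempt from the question can also be made to work by normalizing everything to strings first, which sidesteps the datatype mismatch and returns only the matching columns. A sketch with made-up column names:

```python
import pandas as pd

def find_any(company_df, companyID):
    # Cast every cell to string so '1' matches both the int 1 and the str '1'
    found = company_df.astype(str).isin([str(i) for i in companyID]).any()
    return found.index[found].tolist()

company_df = pd.DataFrame({'company_number': [1, 2, 9],
                           'company_value': ['x', 'y', 'z'],
                           'job_referred_by': ['3', 'n/a', 'n/a']})
companyID = ['1', '2', '3']
print(find_any(company_df, companyID))  # ['company_number', 'job_referred_by']
```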
I have the output from
df = pd.DataFrame.from_records(get_data)
display(df)
Output
'f_data':[{'fid': '9.3', 'lfid': '39.3'}, {'fid': '839.4', 'lfid': '739.3'}]
Needed output format like below:
f_data
    fid      lfid
    9.3      39.3
    839.4    739.3
Try selecting the correct key from the dict:
import pandas as pd

d = {'f_data':[{'fid': '9.3', 'lfid': '39.3'}, {'fid': '839.4', 'lfid': '739.3'}]}
out = pd.DataFrame(d['f_data'])
Out[147]:
fid lfid
0 9.3 39.3
1 839.4 739.3
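If the records were nested more deeply, pd.json_normalize (available since pandas 1.0) flattens them the same way; a quick sketch with the same data:

```python
import pandas as pd

d = {'f_data': [{'fid': '9.3', 'lfid': '39.3'},
                {'fid': '839.4', 'lfid': '739.3'}]}

# For flat dicts this matches pd.DataFrame(d['f_data']); it also
# handles nested keys like {'a': {'b': 1}} by flattening to 'a.b'
out = pd.json_normalize(d['f_data'])
print(out)
```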
I'm trying to apply a function to a specific column in this dataframe
datetime PM2.5 PM10 SO2 NO2
0 2013-03-01 7.125000 10.750000 11.708333 22.583333
1 2013-03-02 30.750000 42.083333 36.625000 66.666667
2 2013-03-03 76.916667 120.541667 61.291667 81.000000
3 2013-03-04 22.708333 44.583333 22.854167 46.187500
4 2013-03-06 223.250000 265.166667 116.236700 142.059383
5 2013-03-07 263.375000 316.083333 97.541667 147.750000
6 2013-03-08 221.458333 297.958333 69.060400 120.092788
I'm trying to apply this function (below) to a specific column (PM10) of the above dataframe:
range1 = [list(range(0,50)), list(range(51,100)), list(range(101,200)),
          list(range(201,300)), list(range(301,400)), list(range(401,2000))]

def c1_c2(x, y):
    for a in y:
        if x in a:
            min_val = min(a)
            max_val = max(a) + 1
            return max_val - min_val
where "x" can be any column value and "y" = range1.
Available Options
df.PM10.apply(c1_c2,args(df.PM10,range1),axis=1)
df.PM10.apply(c1_c2)
I've tried both of these options and neither seems to work. Any suggestions?
Not sure what the expected output from the function is, but to get the function called you can try the following:
from functools import partial
df.PM10.apply(partial(c1_c2, y=range1))
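As a side note, Series.apply also accepts extra positional arguments directly via args, so the partial isn't strictly required. A sketch with toy ranges and numbers (not the poster's full data):

```python
import pandas as pd

# toy ranges and data, just to show the call signature
range1 = [list(range(0, 50)), list(range(51, 100))]

def c1_c2(x, y):
    for a in y:
        if x in a:
            return max(a) + 1 - min(a)

df = pd.DataFrame({'PM10': [10, 60, 500]})
# args=(range1,) passes range1 as the second positional argument (y)
result = df.PM10.apply(c1_c2, args=(range1,))
print(result)
```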
Update:
OK, I think I understand a little better. This should work, but range1 is a list of lists of integers; your data doesn't contain integers, so the new column comes up empty. I created another list based on your initial data that does match. See below:
df = pd.read_csv('pm_data.txt', header=0)
range1 = [[7.125000, 10.750000, 11.708333, 22.583333],
          list(range(0,50)), list(range(51,100)), list(range(101,200)),
          list(range(201,300)), list(range(301,400)), list(range(401,2000))]

def c1_c2(x, y):
    for a in y:
        if x in a:
            min_val = min(a)
            max_val = max(a) + 1
            return max_val - min_val

df['function'] = df.PM10.apply(lambda x: c1_c2(x, range1))
print(df.head(10))
datetime PM2.5 PM10 SO2 NO2 new_column function
0 2013-03-01 7.125000 10.750000 11.708333 22.583333 25.750000 16.458333
1 2013-03-02 30.750000 42.083333 36.625000 66.666667 2.104167 NaN
2 2013-03-03 76.916667 120.541667 61.291667 81.000000 6.027083 NaN
3 2013-03-04 22.708333 44.583333 22.854167 46.187500 2.229167 NaN
4 2013-03-06 223.250000 265.166667 116.236700 142.059383 13.258333 NaN
5 2013-03-07 263.375000 316.083333 97.541667 147.750000 15.804167 NaN
6 2013-03-08 221.458333 297.958333 69.060400 120.092788 14.897917 NaN
Only the first row gets a value in 'function', because those values came from your initial data; every other value fails the 'if x in a' membership test.
Old Code:
I'm also not sure what you are doing, but you can use a lambda to modify columns or create new ones, like this:
import pandas as pd
I created a data file to import from the data you posted above:
datetime,PM2.5,PM10,SO2,NO2
2013-03-01,7.125000,10.750000,11.708333,22.583333
2013-03-02,30.750000,42.083333,36.625000,66.666667
2013-03-03,76.916667,120.541667,61.291667,81.000000
2013-03-04,22.708333,44.583333,22.854167,46.187500
2013-03-06,223.250000,265.166667,116.236700,142.059383
2013-03-07,263.375000,316.083333,97.541667,147.750000
2013-03-08,221.458333,297.958333,69.060400,120.092788
Here is how I import it,
df = pd.read_csv('pm_data.txt', header=0)
and create a new column and apply a function to the data in 'PM10'
df['new_column'] = df['PM10'].apply(lambda x: x+15 if x < 30 else x/20)
which yields,
datetime PM2.5 PM10 SO2 NO2 new_column
0 2013-03-01 7.125000 10.750000 11.708333 22.583333 25.750000
1 2013-03-02 30.750000 42.083333 36.625000 66.666667 2.104167
2 2013-03-03 76.916667 120.541667 61.291667 81.000000 6.027083
3 2013-03-04 22.708333 44.583333 22.854167 46.187500 2.229167
4 2013-03-06 223.250000 265.166667 116.236700 142.059383 13.258333
5 2013-03-07 263.375000 316.083333 97.541667 147.750000 15.804167
6 2013-03-08 221.458333 297.958333 69.060400 120.092788 14.897917
Let me know if this helps.
"I've tried these couple of available options and none of them seems to be working..."
What do you mean by this? What's your output, are you getting errors or what?
I see a couple of problems:
the range1 lists contain ints while your column values are floats, so c1_c2() will return None.
even if the data types matched between range1 and the columns, c1_c2() still returns None whenever a value is not in range1.
Below is how I would do it, assuming the data-types match:
def c1_c2(x):
    range1 = [list of lists]
    for a in range1:
        if x in a:
            min_val = min(a)
            max_val = max(a) + 1
            return max_val - min_val
    return x  # returns the original value if not in range1
df.PM10.apply(c1_c2)
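Since the underlying problem is that floats like 120.541667 can never be `in` a list of ints, another option, assuming the intent is bucketing values into bands, is to compare against interval bounds rather than list membership. This is a sketch of that idea, not the original poster's code:

```python
import pandas as pd

# (lo, hi) bounds standing in for list(range(lo, hi))
bounds = [(0, 50), (51, 100), (101, 200),
          (201, 300), (301, 400), (401, 2000)]

def c1_c2(x, y=bounds):
    for lo, hi in y:
        if lo <= x < hi:       # half-open, like range(lo, hi), but works for floats
            return hi - lo     # equivalent to max(a) + 1 - min(a)
    return x                   # fall back to the original value

df = pd.DataFrame({'PM10': [10.75, 120.541667, 316.083333]})
out = df.PM10.apply(c1_c2)
print(out)
```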
I have a stock data set like
Date Open High ... Close Adj Close Volume
0 2014-09-17 465.864014 468.174011 ... 457.334015 457.334015 21056800
1 2014-09-18 456.859985 456.859985 ... 424.440002 424.440002 34483200
2 2014-09-19 424.102997 427.834991 ... 394.795990 394.795990 37919700
3 2014-09-20 394.673004 423.295990 ... 408.903992 408.903992 36863600
4 2014-09-21 408.084991 412.425995 ... 398.821014 398.821014 26580100
I need to take the cumulative sum of the columns Open, High, Close, Adj Close and Volume.
I tried df.cumsum(), but it raises an error on the timestamp column.
I think for processing trade data it is best to create a DatetimeIndex:
#if necessary
#df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')
And then, if necessary, take the cumulative sum over all columns:
df = df.cumsum()
If you want the cumulative sum only for some columns:
cols = ['Open','High','Close','Adj Close','Volume']
df[cols] = df[cols].cumsum()
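A minimal runnable sketch of both steps, with made-up prices and only two of the columns:

```python
import pandas as pd

df = pd.DataFrame({'Date': ['2014-09-17', '2014-09-18', '2014-09-19'],
                   'Open': [465.86, 456.86, 424.10],
                   'Volume': [21056800, 34483200, 37919700]})

# Move the date out of the data columns so cumsum never touches it
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')

cols = ['Open', 'Volume']
df[cols] = df[cols].cumsum()   # running total down each selected column
print(df)
```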
I have a list which I want to use as the column labels.
But when I use pandas' read_excel, it always treats the 0th row as the column labels.
How can I read the file into a dataframe and then set my list as the column labels?
orig_index = pd.read_excel(basic_info, sheetname = 'KI12E00')
0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366 \
0 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
1 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
Here is my personal list for column name
print set_index
[20140109, 20140213, 20140313, 20140410, 20140508, 20140612]
And I want to make dataframe as below
   20140109  20140213  20140313  20140410  20140508  20140612
0  0.619159  0.264191  0.438849  0.465287  0.445819  0.412582  0.397366
1  0.601379  0.303953  0.457524  0.432335  0.415333  0.382093  0.382361
2  0.579914  0.343715  0.418294  0.401129  0.385508  0.355392  0.355123
Pass header=None to tell it there isn't a header, and you can pass a list in names to tell it what you want to use at the same time. (Note that you're missing a column name in your example; I'm assuming that's accidental.)
For example:
>>> df = pd.read_excel("out.xlsx", header=None)
>>> df
0 1 2 3 4 5 6
0 0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366
1 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
2 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
or
>>> names = [20140109, 20140213, 20140313, 20140410, 20140508, 20140612, 20140714]
>>> df = pd.read_excel("out.xlsx", header=None, names=names)
>>> df
20140109 20140213 20140313 20140410 20140508 20140612 20140714
0 0.619159 0.264191 0.438849 0.465287 0.445819 0.412582 0.397366
1 0.601379 0.303953 0.457524 0.432335 0.415333 0.382093 0.382361
2 0.579914 0.343715 0.418294 0.401129 0.385508 0.355392 0.355123
And you can always set the column names after the fact by assigning to df.columns.
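For example, assigning to df.columns after the read (using a small made-up frame here in place of the Excel file):

```python
import pandas as pd

df = pd.DataFrame([[0.619159, 0.264191],
                   [0.601379, 0.303953]])
df.columns = [20140109, 20140213]   # replaces the default 0, 1, ... labels
print(df)
```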