How to calculate active hours of an employee using face_recognition for attendance tracking - python-3.x

I am working on a face recognition system for my academic project. I want to record the first time an employee is recognized as his first active time, record each subsequent recognition as his last active time, and then calculate the total active hours from those two timestamps.
I tried the following code, but I'm only getting the current system time as the start time. Can someone help me figure out what I am doing wrong?
Code:
data = pickle.loads(open(args["encodings"], "rb").read())
vs = VideoStream(src=0).start()
writers = None
time.sleep(2.0)

while True:
    frame = vs.read()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    rgb = imutils.resize(frame, width=750)
    r = frame.shape[1] / float(rgb.shape[1])
    boxes = face_recognition.face_locations(rgb)
    encodings = face_recognition.face_encodings(rgb, boxes)
    names = []
    face_names = []
    for encoding in encodings:
        matches = face_recognition.compare_faces(data["encodings"], encoding)
        name = "Unknown"
        if True in matches:
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            for i in matchedIdxs:
                name = data["names"][i]
                counts[name] = counts.get(name, 0) + 1
            name = max(counts, key=counts.get)
        names.append(name)
    if names != []:
        for i in names:
            first_active_time = datetime.now().strftime('%H:%M')
            last_active_time = datetime.now().strftime('%H:%M')
            difference = datetime.strptime(first_active_time, '%H:%M') - datetime.strptime(last_active_time, '%H:%M')
            difference = difference.total_seconds()
            total_hours = time.strftime("%H:%M", time.gmtime(difference))
            face_names.append([i, first_active_time, last_active_time, total_hours])
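
Note that the snippet assigns both first_active_time and last_active_time from datetime.now() in the same loop iteration, so their difference is always zero and the "start time" is always the current time. A minimal sketch of one way to do what the question describes, keeping per-name state across frames (the dict and function names here are illustrative, not part of the original code):

from datetime import datetime

activity = {}  # name -> {"first_seen": datetime, "last_seen": datetime}

def record_sighting(name):
    # Call this once per recognized face, inside the video loop.
    now = datetime.now()
    if name not in activity:
        activity[name] = {"first_seen": now, "last_seen": now}
    else:
        activity[name]["last_seen"] = now

def active_hours(name):
    # Total active time in hours between first and last recognition.
    t = activity[name]
    return (t["last_seen"] - t["first_seen"]).total_seconds() / 3600.0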

Related

networkx output scale problem with matplotlib (re-post)

I'm re-posting this question since I didn't provide a good example in my last question.
I'm trying to place nodes at specific, fixed locations, but the output drawing is not fixed. Let me show you the pictures.
This first one I made with 10 nodes worked perfectly, exactly as I intended. It also has plt.text in the bottom left.
And here's the other picture.
As you can see, something is wrong: the plt.text is gone, and USA's location is odd. That location is actually where DEU sits in the first picture, and both pictures use the same code.
Now, let me show you some of my code.
For spec_df, please download the example file from my Google Drive:
https://drive.google.com/drive/folders/11X_i5-pRLGBfQ9vIwQ3hfDU5EWIfR3Uo?usp=sharing
auto_flag = 0
spec_df = pd.read_stata("C:\\Your_file_loc\\CombinedHS6_example.dta")  # replace Your_file_loc with your own path
#top_10_list = ["USA","CHN","KOR"] (Try for three nodes)
#or
#auto_flag = 1 (Try for 10 nodes)
df_p = spec_df[['partneriso3','tradevalue']]
df_p = df_p.groupby('partneriso3').sum().reset_index()
df_r = spec_df[['reporteriso3','tradevalue']]
df_r = df_r.groupby('reporteriso3').sum().reset_index()
df_r = df_r.rename(columns={'reporteriso3': 'Nation'})
df_r = df_r.rename(columns={'tradevalue': 'tradevalue_r'})
df_p = df_p.rename(columns={'partneriso3': 'Nation'})
df_s = pd.merge(df_r, df_p, on='Nation', how='outer').fillna(0)
df_s["final"] = df_s['tradevalue'] + df_s['tradevalue_r']
if auto_flag == 1:
    df_s = df_s.sort_values(by=['final'], ascending=False).reset_index()
    cut = df_s[:10]
else:
    cut = df_s[(df_s['Nation'].isin(top_10_list))]
cut['final'] = cut['final'].apply(lambda x: normalize(x, cut['final'].max()))
cut['font_size'] = cut['final'] * 13
cut['final'] = cut['final'] * 1500
top_10_list = list(cut["Nation"])
top10 = spec_df[(spec_df['reporteriso3'].isin(top_10_list))&(spec_df['partneriso3'].isin(top_10_list))]
top10['tradevalue'] = top10['tradevalue'].apply(lambda x: normalize(x, top10['tradevalue'].max()))
top10['tradevalue'] = top10['tradevalue']*10
plt.figure(figsize=(10,10), dpi=100)
G = nx.from_pandas_edgelist(top10, 'reporteriso3', 'partneriso3', 'tradevalue', create_using=nx.DiGraph())
widths = nx.get_edge_attributes(G,'tradevalue')
pos = {}
pos_cord = [(-0.30779309, -0.26419882), (0.26767895, 0.19524759), (-0.38479095, 0.88179998), (0.33785317, 0.96090914), (0.94090464, 0.40707934), (0.9270665, -0.38403114), (0.41246223, -0.85684049), (-0.32083322, -1.0), (-0.99724456, -0.34947554), (-0.87530367, 0.40950993)]
for t in range(len(top_10_list)):
    if top_10_list[t] == "":
        continue
    else:
        pos[top_10_list[t]] = pos_cord[t]
pos_nodes = nudge(pos, 0, 0.12)
nx.draw_networkx_edges(G, pos, width=list(widths.values()), edge_color='#9ECAE4')
nx.draw_networkx_nodes(G, pos=pos, nodelist=cut['Nation'], node_size=cut['final'], node_color='#AB89EF', edgecolors='#000000')
nx.draw_networkx_labels(G, pos_nodes, font_size=15)
plt.text(-1.15,-1.15,s='hs : ')
plt.savefig(location, dpi=300)
Sorry for the crude code. I'm using fixed coordinates, so the nodes are not supposed to move. I think the plot's size is somehow interacting with the contents, but I don't know how it does that.
Could anyone enlighten me, please? This is driving me crazy...
Thanks to @Paul Brodersen's comment, I found a way to fix the locations.
I just added these lines to my code:
fig = plt.figure(figsize=(10,10), dpi = 100)
axes = fig.add_axes([0,0,1,1])
axes.set_xlim([-1.3,1.3])
axes.set_ylim([-1.3,1.3])
Thank you for the help again!
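
The underlying behavior: without explicit axes limits, matplotlib autoscales the axes to whatever artists are drawn, so the same data coordinates map to different screen positions for different node sets, and text at (-1.15, -1.15) can fall outside the view entirely. For context, a self-contained sketch (toy graph and coordinates, not the real trade data) of where the fix sits relative to the drawing calls:

import matplotlib.pyplot as plt
import networkx as nx

# Toy stand-ins for the real node set and positions
G = nx.DiGraph()
G.add_edge('USA', 'DEU', tradevalue=3.0)
pos = {'USA': (-0.3, -0.26), 'DEU': (0.27, 0.2)}

fig = plt.figure(figsize=(10, 10), dpi=100)
axes = fig.add_axes([0, 0, 1, 1])
axes.set_xlim([-1.3, 1.3])   # freeze the data window so positions are stable
axes.set_ylim([-1.3, 1.3])

nx.draw_networkx_edges(G, pos, width=[3.0], edge_color='#9ECAE4')
nx.draw_networkx_nodes(G, pos=pos, node_size=500, node_color='#AB89EF', edgecolors='#000000')
nx.draw_networkx_labels(G, pos, font_size=15)
plt.text(-1.15, -1.15, s='hs : ')   # now guaranteed to stay inside the view
plt.show()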

What is the problem in the following code?

I want to auto-generate a book fine of 10% of the book cost. I have written the following code, but nothing happens: no error is raised and it does not work. The book_cost field is defined in the book model.
Please check the code.
issue_date = fields.Date('Issue Date', required=True, tracking=True)
due_date = fields.Date('Due Date', required=True, tracking=True)
book_ids = fields.Many2many('odooschool.library.books','tch_book_rel','book_name','teacher_id','Issued Books')
sequence = fields.Integer('sequence')
fine_amount = fields.Char('Fine Amount', compute='_get_cost_details')
submission_date = fields.Date.today()
price = fields.Char('Price')

@api.depends('due_date','book_ids.book_cost')
def _get_cost_details(self):
    market_multiplier = 0
    date_return = fields.Date()
    for rec in self:
        fine_amount = 0
        if rec.due_date and rec.submission_date and rec.due_date > rec.submission_date:
            date_return = (rec.due_date - rec.submission_date)
            market_multiplier = int(decimal.Decimal('0.10'))
            fine_amount = rec.book_ids.book_cost * market_multiplier
            rec.fine_amount += rec.fine_amount
I think if you replace
submission_date = fields.Date.today()
with
submission_date = fields.Date(default=fields.Date.today)
it will work, because submission_date in your code is evaluated only once, when the server loads the model, so it is always the start date of the Odoo server.
Regards
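
Beyond the default, two more things in the compute method stand out: int(decimal.Decimal('0.10')) truncates to 0, so the fine is always zero, and rec.fine_amount += rec.fine_amount never uses the computed local variable. A minimal sketch with those points addressed, not drop-in code (the model name and Many2many signature here are hypothetical stand-ins following the question):

from odoo import api, fields, models

class LibraryIssue(models.Model):
    _name = 'odooschool.library.issue'  # hypothetical model name

    due_date = fields.Date('Due Date', required=True)
    submission_date = fields.Date(default=fields.Date.today)
    book_ids = fields.Many2many('odooschool.library.books', string='Issued Books')
    # Float instead of Char, since the fine is a number
    fine_amount = fields.Float('Fine Amount', compute='_get_cost_details')

    @api.depends('due_date', 'submission_date', 'book_ids.book_cost')
    def _get_cost_details(self):
        for rec in self:
            fine = 0.0
            # fine applies when the book comes back after the due date
            if rec.due_date and rec.submission_date and rec.submission_date > rec.due_date:
                # keep the multiplier a float; int(Decimal('0.10')) would be 0
                fine = sum(rec.book_ids.mapped('book_cost')) * 0.10
            rec.fine_amount = fine  # assign, don't accumulate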

Calculate percentage change in pandas with rows that contain the same values

I am using Pandas to calculate percentage changes between values that occur more than once in the column of interest.
I want to compare the values from last week's workout, provided they are the same exercise type, to get the percentage change in weight used and reps accomplished.
I can get the percentages for all rows, which is halfway to what I want, but the conditional part is missing: I only want the percentages when exercise_name has the same value, since the goal is to track improvement on a weekly or bi-weekly basis.
ids = self.user_data["exercise"].fillna(0)
dups = self.user_data[ids.isin(ids[ids.duplicated()])].sort_values("exercise")
dups['exercise'] = dups['exercise'].astype(str)
dups['set_one_weight'] = pd.to_numeric(dups['set_one_weight'])
dups['set_two_weight'] = pd.to_numeric(dups['set_two_weight'])
dups['set_three_weight'] = pd.to_numeric(dups['set_three_weight'])
dups['set_four_weight'] = pd.to_numeric(dups['set_four_weight'])
dups['set_one'] = pd.to_numeric(dups['set_one'])
dups['set_two'] = pd.to_numeric(dups['set_two'])
dups['set_three'] = pd.to_numeric(dups['set_three'])
dups['set_four'] = pd.to_numeric(dups['set_four'])
percent_change = dups[['set_three_weight']].pct_change()
The last line gets the percentage change across all rows of set_three_weight, but it can't do what I described above: find rows with the same name and compute the percentage change within them.
UPDATE
Using Group By Solution
ids = self.user_data["exercise"].fillna(0)
dups = self.user_data[ids.isin(ids[ids.duplicated()])].sort_values("exercise")
dups['exercise'] = dups['exercise'].astype(str)
dups['set_one_weight'] = pd.to_numeric(dups['set_one_weight'])
dups['set_two_weight'] = pd.to_numeric(dups['set_two_weight'])
dups['set_three_weight'] = pd.to_numeric(dups['set_three_weight'])
dups['set_four_weight'] = pd.to_numeric(dups['set_four_weight'])
dups['set_one'] = pd.to_numeric(dups['set_one'])
dups['set_two'] = pd.to_numeric(dups['set_two'])
dups['set_three'] = pd.to_numeric(dups['set_three'])
dups['set_four'] = pd.to_numeric(dups['set_four'])
dups['routine_upload_date'] = pd.to_datetime(dups['routine_upload_date'])
# percent_change = dups[['set_three_weight']].pct_change()
# Group the exercises together and create a new cols that represent the percentage delta variation in percentages
dups.sort_values(['exercise', 'routine_upload_date'], inplace=True, ascending=[True, False])
dups['set_one_weight_delta'] = (dups.groupby('exercise')['set_one_weight'].apply(pd.Series.pct_change) + 1)
dups['set_two_weight_delta'] = (dups.groupby('exercise')['set_two_weight'].apply(pd.Series.pct_change) + 1)
dups['set_three_weight_delta'] = (dups.groupby('exercise')['set_three_weight'].apply(pd.Series.pct_change) + 1)
dups['set_four_weight_delta'] = (dups.groupby('exercise')['set_four_weight'].apply(pd.Series.pct_change) + 1)
dups['set_one_reps_delta'] = (dups.groupby('exercise')['set_one'].apply(pd.Series.pct_change) + 1)
dups['set_two_reps_delta'] = (dups.groupby('exercise')['set_two'].apply(pd.Series.pct_change) + 1)
dups['set_three_reps_delta'] = (dups.groupby('exercise')['set_three'].apply(pd.Series.pct_change) + 1)
dups['set_four_reps_delta'] = (dups.groupby('exercise')['set_four'].apply(pd.Series.pct_change) + 1)
print(dups.head())
I think this gets me the results I want; I'd appreciate it if someone could confirm.
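
If the update above gives the expected numbers, it can be tightened a bit: pandas lets you call pct_change() directly on the GroupBy, so the per-column apply isn't needed. A small self-contained sketch with made-up sample data (note that sorting dates ascending makes each delta read week-over-week; the descending date sort in the update computes the change from the newer row to the older one):

import pandas as pd

df = pd.DataFrame({
    'exercise': ['bench', 'bench', 'squat', 'squat'],
    'routine_upload_date': pd.to_datetime(
        ['2021-01-01', '2021-01-08', '2021-01-01', '2021-01-08']),
    'set_one_weight': [60.0, 62.5, 100.0, 105.0],
})

# sort chronologically within each exercise so pct_change reads week-over-week
df = df.sort_values(['exercise', 'routine_upload_date'])
# percentage change within each exercise only; the first row of each group is NaN
df['set_one_weight_delta'] = df.groupby('exercise')['set_one_weight'].pct_change() + 1
print(df)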

How can I get a different max for each of several lists in Python

I want to get a different max for each day, but the problem is I keep getting the same max, the one from the first day, repeated. This is my code. What is wrong with it, and what should I change to obtain a separate max for each day?
def best(contactList_id, ntf_DeliveredCount):
    maxtForEvryDay = []
    yPredMaxForDay = 0
    for day in range(1, 8):
        for marge in range(1, 5):
            result = predictUsingNewSample([[contactList_id, ntf_DeliveredCount, day, marge]])
            if (result > yPredMaxForDay):
                yPredMaxForDay = 0
                yPredMaxForDay = result
        maxtForEvryDay.append(yPredMaxForDay)
    return maxtForEvryDay

best(contactList_id=13.0, ntf_DeliveredCount=5280.0)
result:
[1669.16010381]
[1708.32915255]
[1747.49820129]
[1786.66725003]
[1570.05500351]
[1609.22405225]
[1648.39310099]
[1687.56214973]
[1491.60792629]
[1510.11895195]
[1549.28800069]
[1588.45704943]
[1402.21845533]
[1420.73953501]
[1450.18290039]
[1489.35194913]
[1367.15490803]
[1356.21411426]
[1345.27532239]
[1390.24684884]
[1378.1190426]
[1367.17824883]
[1419.23588013]
[1486.78241686]
[1450.21261674]
[1516.04342599]
[1581.87423524]
[1647.7050445]
[array([1786.66725003]),
array([1786.66725003]),
array([1786.66725003]),
array([1786.66725003]),
array([1786.66725003]),
array([1786.66725003]),
array([1786.66725003])]
This is my function predictUsingNewSample(X_test):
def predictUsingNewSample(X_test):
    #print(X_test)
    # Load the model from file
    with open("pickle_model.pkl", 'rb') as file:
        pickle_model = pickle.load(file)
    Ypredict = pickle_model.predict(X_test)
    print(Ypredict)
    return Ypredict
Try this:
def best(contactList_id, ntf_DeliveredCount):
    maxtForEvryDay = []
    for day in range(1, 8):
        yPredMaxForDay = 0
        for marge in range(1, 5):
            result = predictUsingNewSample([[contactList_id, ntf_DeliveredCount, day, marge]])
            if (result > yPredMaxForDay):
                yPredMaxForDay = result
        maxtForEvryDay.append(yPredMaxForDay)
    return maxtForEvryDay

best(contactList_id=13.0, ntf_DeliveredCount=5280.0)
I think the problem comes from the fact that you never reset your yPredMaxForDay variable for each day.
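
For reference, a more compact version of the same logic (a sketch assuming predictUsingNewSample is defined as in the question and returns a single-element prediction): max() over a generator expression makes the per-day reset implicit, since no accumulator leaks across days.

def best(contactList_id, ntf_DeliveredCount):
    # one max per day, each computed independently over the four marge values
    return [
        max(
            predictUsingNewSample([[contactList_id, ntf_DeliveredCount, day, marge]])
            for marge in range(1, 5)
        )
        for day in range(1, 8)
    ]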

Value error in assigning to dataframe

I am assigning different data to one dataframe, and I got the following error:
ValueError: If using all scalar values, you must pass an index
I followed a question posted by someone else here, but it did not work out.
The following is my code. All you have to do is copy and paste the code into an IDE.
import pandas as pd
import numpy as np
#Loading Team performance Data (ExpG (Home away)) For and against
epl_1718 = pd.read_csv("http://www.football-data.co.uk/mmz4281/1718/E0.csv")
epl_1718 = epl_1718[['HomeTeam','AwayTeam','FTHG','FTAG']]
epl_1718 = epl_1718.rename(columns={'FTHG': 'HomeGoals', 'FTAG': 'AwayGoals'})
Home_goal_avg = epl_1718['HomeGoals'].mean()
Away_goal_avg = epl_1718['AwayGoals'].mean()
Home_team_goals = epl_1718.groupby(['HomeTeam'])['HomeGoals'].sum()
Home_count = epl_1718.groupby(['HomeTeam'])['HomeTeam'].count()
Home_team_avg_goal = Home_team_goals/Home_count
Home_team_concede = epl_1718.groupby(['HomeTeam'])['AwayGoals'].sum()
EPL_Home_average_score = epl_1718['HomeGoals'].mean()
EPL_Home_average_conc = epl_1718['HomeGoals'].mean()
Home_team_avg_conc = Home_team_concede/Home_count
Away_team_goals = epl_1718.groupby(['AwayTeam'])['AwayGoals'].sum()
Away_count = epl_1718.groupby(['AwayTeam'])['AwayTeam'].count()
Away_team_avg_goal = Away_team_goals/Away_count
Away_team_concede = epl_1718.groupby(['AwayTeam'])['HomeGoals'].sum()
EPL_Away_average_score = epl_1718['AwayGoals'].mean()
EPL_Away_average_conc = epl_1718['HomeGoals'].mean()
Away_team_avg_conc = Away_team_concede/Away_count
Home_attk_sth = Home_team_avg_goal/EPL_Home_average_score
Home_attk_sth = Home_attk_sth.sort_index().reset_index()
Home_def_sth = Home_team_avg_conc/EPL_Home_average_conc
Home_def_sth = Home_def_sth.sort_index().reset_index()
Away_attk_sth = Away_team_avg_goal/EPL_Away_average_score
Away_attk_sth = Away_attk_sth.sort_index().reset_index()
Away_def_sth = Away_team_avg_conc/EPL_Away_average_conc
Away_def_sth = Away_def_sth.sort_index().reset_index()
Home_def_sth
HomeTeam = epl_1718['HomeTeam'].drop_duplicates().sort_index().reset_index().set_index('HomeTeam')
AwayTeam = epl_1718['AwayTeam'].drop_duplicates().sort_index().reset_index().sort_values(['AwayTeam']).set_index(['AwayTeam'])
#HomeTeam = HomeTeam.sort_index().reset_index()
Team = HomeTeam.append(AwayTeam).drop_duplicates()
Data = pd.DataFrame({"Team":Team,
"Home_attkacking":Home_attk_sth,
"Home_def": Home_def_sth,
"Away_Attacking":Away_attk_sth,
"Away_def":Away_def_sth,
"EPL_Home_avg_score":EPL_Home_average_score,
"EPL_Home_average_conc":EPL_Home_average_conc,
"EPL_Away_average_score":EPL_Away_average_score,
"EPL_Away_average_conc":EPL_Away_average_conc},
columns =['Team','Home_attacking','Home_def','Away_attacking','Away_def',
'EPL_Home_avg_score','EPL_Home_avg_conc','EPL_Away_avg_score','EPL_Away_average_conc'])
In this code, what I am trying to do is get the average goals scored per team per game and the average goals conceded per team per game.
I am then calculating other performance factors such as attacking strength, defensive strength, etc.
I had to paste the full code because with a small made-up example the data frame creation would work.
Thanks for understanding, and thanks in advance for the advice.
The final data frame will have the following columns:
Team, Home attacking, Home defensive, Away attacking, Away defensive,
and so on as listed in the data frame constructor.
This means there will be only 20 teams under the Team column, so the shape of the dataframe will be (20, 9).
Regards,
Zep
The main idea here is to remove reset_index, keeping the Series indexed by team, so the Team variable is not necessary and is created as a last step by reset_index. Also be careful with column names in the DataFrame constructor: if a dictionary key such as EPL_Home_average_conc is later listed as EPL_Home_avg_conc in columns, you get NaN columns:
Home_team_goals = epl_1718.groupby(['HomeTeam'])['HomeGoals'].sum()
Home_count = epl_1718.groupby(['HomeTeam'])['HomeTeam'].count()
Home_team_avg_goal = Home_team_goals/Home_count
Home_team_concede = epl_1718.groupby(['HomeTeam'])['AwayGoals'].sum()
EPL_Home_average_score = epl_1718['HomeGoals'].mean()
EPL_Home_average_conc = epl_1718['HomeGoals'].mean()
Home_team_avg_conc = Home_team_concede/Home_count
Away_team_goals = epl_1718.groupby(['AwayTeam'])['AwayGoals'].sum()
Away_count = epl_1718.groupby(['AwayTeam'])['AwayTeam'].count()
Away_team_avg_goal = Away_team_goals/Away_count
Away_team_concede = epl_1718.groupby(['AwayTeam'])['HomeGoals'].sum()
EPL_Away_average_score = epl_1718['AwayGoals'].mean()
EPL_Away_average_conc = epl_1718['HomeGoals'].mean()
Away_team_avg_conc = Away_team_concede/Away_count
#removed reset_index
Home_attk_sth = Home_team_avg_goal/EPL_Home_average_score
Home_attk_sth = Home_attk_sth.sort_index()
Home_def_sth = Home_team_avg_conc/EPL_Home_average_conc
Home_def_sth = Home_def_sth.sort_index()
Away_attk_sth = Away_team_avg_goal/EPL_Away_average_score
Away_attk_sth = Away_attk_sth.sort_index()
Away_def_sth = Away_team_avg_conc/EPL_Away_average_conc
Away_def_sth = Away_def_sth.sort_index()
Data = pd.DataFrame({"Home_attacking":Home_attk_sth,
"Home_def": Home_def_sth,
"Away_attacking":Away_attk_sth,
"Away_def":Away_def_sth,
"EPL_Home_average_score":EPL_Home_average_score,
"EPL_Home_average_conc":EPL_Home_average_conc,
"EPL_Away_average_score":EPL_Away_average_score,
"EPL_Away_average_conc":EPL_Away_average_conc},
columns =['Home_attacking','Home_def','Away_attacking','Away_def',
'EPL_Home_average_score','EPL_Home_average_conc',
'EPL_Away_average_score','EPL_Away_average_conc'])
#column from index
Data = Data.rename_axis('Team').reset_index()
print (Data)
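
To illustrate why dropping reset_index fixes the ValueError, here is a tiny standalone example with made-up numbers: Series that share a team index align automatically in the DataFrame constructor, and the scalar league averages broadcast along that index, so no explicit index argument is needed.

import pandas as pd

# Toy numbers, not the real EPL data
home_attacking = pd.Series({'Arsenal': 1.20, 'Chelsea': 0.95})
away_attacking = pd.Series({'Arsenal': 0.80, 'Chelsea': 1.10})

data = pd.DataFrame({
    'Home_attacking': home_attacking,
    'Away_attacking': away_attacking,
    'EPL_Home_average_score': 1.53,  # scalar broadcasts over the team index
})
data = data.rename_axis('Team').reset_index()  # team index becomes a column
print(data)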