Python pyodbc fetchmany() how to select output to update query - python-3.x

I have code that uses fetchmany() to output e.g. 10 records, and I have added an incrementing value (0 1 2 3 4 5) to each print statement. Now I want the user to input 0 or 1 and have that index select the corresponding row, so I can update the SQL record for that row.
cur.execute("select events.SERIALNUM, emp.LASTNAME, emp.SSNO,
events.EVENT_TIME_UTC from AccessControl.dbo.emp,
AccessControl.dbo.events where emp.id = events.empid and emp.SSNO=?
order by EVENT_TIME_UTC desc ", empid)
rows = cur.fetchmany(att_date)
n = 0
for row in rows :
event_date = row.EVENT_TIME_UTC
utc = event_date.replace(tzinfo=from_zone)
utc_to_local = utc.astimezone(to_zone)
local_time = utc_to_local.strftime('%H:%M:%S')
att_date = utc_to_local.strftime('%d:%m:%y')
print (n, row.SERIALNUM, row.LASTNAME, row.SSNO, att_date, local_time)
n = n + 1
seri_al = input("Copy And Past the serial number u want to modifiy: ")
This will output the following data:
0 1500448188 FIRST NAME 03249 2017-07-19 17:01:17
1 1500448187 FIRST NAME 03249 2017-07-19 17:01:15
E.g., instead of copying and pasting a full serial number like '1500448188' into
seri_al = input("Copy and paste the serial number you want to modify: ")
I want the user to enter only '0', have that index map to the serial number, and use it in the WHERE clause of the UPDATE query.

It appears that you already know how to use input to prompt for the user's choice. The only piece you are missing is to add items to a dictionary as you loop through the rows. Here is a slightly abstracted example:
rows = [('1500448188',), ('1500448187',)]  # test data
selections = dict()
n = 0
for row in rows:
    selections[n] = row[0]
    print(n, repr(row[0]))
    n += 1
select = input("Enter the index (0, 1, ...) you want to select: ")
selected_key = selections[int(select)]
print("You selected " + repr(selected_key))
which prints
0 '1500448188'
1 '1500448187'
Enter the index (0, 1, ...) you want to select: 1
You selected '1500448187'
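From there the selected value can go straight into a parameterized UPDATE. A minimal sketch, assuming a pyodbc connection named cnxn, and using a hypothetical STATUS column and value only to illustrate the SET clause:
# selections maps the printed index back to the real serial number
serial = selections[int(select)]
cur.execute("update AccessControl.dbo.events set STATUS = ? where SERIALNUM = ?",
            ('modified', serial))
cnxn.commit()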

Related

how to insert multiple rows into an sqlite3 database

I am trying to select rows from a DB table, collect them into a list, and then insert all of the rows at once into another table, but I get an error.
def paid_or_returned_buyingchecks(self):
    date = datetime.now()
    now = date.strftime('%Y-%m-%d')
    self.tenlistchecks = []
    self.con = sqlite3.connect('car dealership.db')
    self.cursorObj = self.con.cursor()
    self.dashboard_buying_checks_dates = self.cursorObj.execute("select id, paymentdate, paymentvalue, car, sellername from cars_buying_checks where nexttendays=?", (now,))
    self.dashboard_buying_checks_dates_output = self.cursorObj.fetchall()
    self.tenlistchecks.append(self.dashboard_buying_checks_dates_output)
    print(self.tenlistchecks)
    self.dashboard_buying_checks_dates = self.cursorObj.executemany("insert into paid_buying_checks VALUES(?,?,?,?,?)", [self.tenlistchecks])
    self.con.commit()
but I got an error:
[[(120, '21-08-2022', '1112', 'Alfa Romeo', 'james'), (122, '21-08-2022', '465', 'Buick', 'daniel '), (123, '21-08-2022', '789', 'Buick', 'daniel ')]]
self.dashboard_buying_checks_dates = self.cursorObj.executemany(
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 5, and there are 1 supplied.
self.cursorObj.fetchall() returns a list of tuples, which is exactly what executemany expects. Note that self.tenlistchecks.append(...) already wraps that list inside another list, so wrapping it again in brackets buries the rows two levels deep and executemany sees a single one-element parameter set. Pass the fetchall result itself, so
self.cursorObj.executemany("insert into paid_buying_checks VALUES(?,?,?,?,?)", self.dashboard_buying_checks_dates_output)
not
self.cursorObj.executemany("insert into paid_buying_checks VALUES(?,?,?,?,?)", [self.tenlistchecks])
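For reference, a minimal sketch of the corrected flow (table and column names taken from the question):
import sqlite3
from datetime import datetime

con = sqlite3.connect('car dealership.db')
cur = con.cursor()
now = datetime.now().strftime('%Y-%m-%d')

# fetchall() already yields a list of 5-tuples, one per selected row
rows = cur.execute(
    "select id, paymentdate, paymentvalue, car, sellername "
    "from cars_buying_checks where nexttendays=?", (now,)).fetchall()

# Feed that list straight to executemany: one tuple per inserted row
cur.executemany("insert into paid_buying_checks VALUES(?,?,?,?,?)", rows)
con.commit()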

Calculate percentage change in pandas with rows that contain the same values

I am using Pandas to calculate the percentage change between values that occur more than once in the column of interest.
I want to compare the values against last week's workout, provided they are the same exercise type, to get the percentage change in weight used and reps accomplished.
I am able to get the percentages of all the rows, which is halfway to what I want, but the conditional part is missing: I only want the percentages when exercise_name has the same value, since the goal is to track how we improve on a weekly or bi-weekly basis.
ids = self.user_data["exercise"].fillna(0)
dups = self.user_data[ids.isin(ids[ids.duplicated()])].sort_values("exercise")
dups['exercise'] = dups['exercise'].astype(str)
dups['set_one_weight'] = pd.to_numeric(dups['set_one_weight'])
dups['set_two_weight'] = pd.to_numeric(dups['set_two_weight'])
dups['set_three_weight'] = pd.to_numeric(dups['set_three_weight'])
dups['set_four_weight'] = pd.to_numeric(dups['set_four_weight'])
dups['set_one'] = pd.to_numeric(dups['set_one'])
dups['set_two'] = pd.to_numeric(dups['set_two'])
dups['set_three'] = pd.to_numeric(dups['set_three'])
dups['set_four'] = pd.to_numeric(dups['set_four'])
percent_change = dups[['set_three_weight']].pct_change()
The last line gets the percentage change across all rows of set_three_weight, but it cannot do what I described above: find the rows with the same exercise name and compute the percentage change within each group.
UPDATE
Using a groupby solution
ids = self.user_data["exercise"].fillna(0)
dups = self.user_data[ids.isin(ids[ids.duplicated()])].sort_values("exercise")
dups['exercise'] = dups['exercise'].astype(str)

weight_cols = ['set_one_weight', 'set_two_weight', 'set_three_weight', 'set_four_weight']
rep_cols = ['set_one', 'set_two', 'set_three', 'set_four']
for col in weight_cols + rep_cols:
    dups[col] = pd.to_numeric(dups[col])
dups['routine_upload_date'] = pd.to_datetime(dups['routine_upload_date'])

# Group the rows by exercise and create new cols holding the week-over-week
# variation as a ratio (pct_change + 1), computed within each group only
dups.sort_values(['exercise', 'routine_upload_date'], inplace=True, ascending=[True, False])
for col in weight_cols:
    dups[col + '_delta'] = dups.groupby('exercise')[col].apply(pd.Series.pct_change) + 1
for col in rep_cols:
    dups[col + '_reps_delta'] = dups.groupby('exercise')[col].apply(pd.Series.pct_change) + 1
print(dups.head())
I think this gets me the results I want; I would like someone to confirm.
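To confirm the approach, here is a minimal, self-contained check on toy data (hypothetical exercise names and weights, sorted oldest-first here so each pct_change compares a week against the previous one):
import pandas as pd

df = pd.DataFrame({
    'exercise': ['squat', 'squat', 'bench', 'bench'],
    'routine_upload_date': pd.to_datetime(['2021-01-01', '2021-01-08',
                                           '2021-01-01', '2021-01-08']),
    'set_one_weight': [100.0, 110.0, 50.0, 55.0],
})
df = df.sort_values(['exercise', 'routine_upload_date'])

# pct_change runs within each exercise group, so 'bench' and 'squat'
# rows are never compared against each other
df['set_one_weight_delta'] = df.groupby('exercise')['set_one_weight'].pct_change() + 1
print(df)
# The first row of each group is NaN; the second rows show 1.10,
# i.e. a 10% week-over-week increase for both exercises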

Using STUFF Function

I have this table, TableA, with the fields [intIdEntidad], [intIdEjercicio], [idTipoGrupoCons]. For idTipoGrupoCons = 16, TableA looks like this image:
[image: rows of TableA for idTipoGrupoCons = 16]
I'm trying to use the STUFF function to show the intIdEjercicio values separated by commas, something like this:
[image: desired result, intIdEjercicio values comma-separated per intIdEntidad]
This is the query I'm using to try to obtain the result in the image above:
SELECT DISTINCT o.idTipoGrupoCons, o.intIdEntidad,
    ejercicios = STUFF((
        SELECT ', ' + CONVERT(VARCHAR, a.intIdEjercicio)
        FROM dbo.[tbEntidades_Privadas_InfoAdicionalGrupo] AS a
        WHERE a.idTipoGrupoCons = 16
        FOR XML PATH, TYPE).value(N'.[1]', N'varchar(max)'), 1, 2, '')
FROM [tbEntidades_Privadas_InfoAdicionalGrupo] AS o
JOIN tbEntidades_Privadas p ON o.intIdEntidad = p.intIdEntidad
WHERE o.idTipoGrupoCons = 16
The result isn't correct: when I execute this query for idTipoGrupoCons = 16
SELECT [idTipoGrupoCons], [intIdEntidad],[intIdEjercicio]
FROM [tbEntidades_Privadas_InfoAdicionalGrupo] A
WHERE A.idTipoGrupoCons = 16
the result is this:
[image: rows returned for idTipoGrupoCons = 16]
This means that for intIdEntidad = 50, intIdEjercicio is just 7, and for intIdEntidad = 45, intIdEjercicio is 2 and 4.
I suppose the problem is that I need a subquery or a condition on intIdEntidad inside the STUFF call, or in the outer WHERE, so the correlation is applied each time the STUFF function is called.
I've read about CROSS APPLY, and perhaps it can be used to solve this problem.
Here is the answer. The problem was that I needed to join TableA with itself inside the STUFF function, correlated on idTipoGrupoCons and intIdEntidad. In the end the query looks like this:
SELECT t1.idTipoGrupoCons, t1.intIdEntidad,
    ejercicios = STUFF(
        (SELECT ', ' + t3.Ejercicio
         FROM [tbEntidades_Privadas_InfoAdicionalGrupo] t2
         JOIN tbMtoNoRegistro_Ejercicios t3 ON t2.intIdEjercicio = t3.intEjercicio
         WHERE t2.idTipoGrupoCons = t1.idTipoGrupoCons
           AND t2.intIdEntidad = t1.intIdEntidad
         ORDER BY t3.Ejercicio
         FOR XML PATH ('')
        ), 1, 2, '')
FROM [tbEntidades_Privadas_InfoAdicionalGrupo] t1
JOIN tbEntidades_Privadas p ON t1.intIdEntidad = p.intIdEntidad
WHERE t1.idTipoGrupoCons = 17
GROUP BY t1.idTipoGrupoCons, t1.intIdEntidad, p.strDenominacionSocial

Merging data frames by date in python

I have 8 dataframes which contain a date and a stock return. They are of different lengths (some 5 years, some 8, etc.). Some of them are randomly undefined, depending on what the user does beforehand (which countries she chooses). What I want to do is merge by date only those dataframes which have data, and then find the correlation between the columns. Please advise me on this issue. I use the command below to merge the data frames, but because some of them may not exist (this is random), I cannot merge them.
merged = pd.concat([germany_return, france_return, usa_return, hongkong_return, india_return, japan_return, england_return, china_return], axis=1)
Below I present the last part of the code:
# Define stock price columns
stock_germany = data_germany['Settle']
stock_france = data_france['Settle']
stock_usa = data_usa['Value']
stock_hongkong = data_hongkong['Last Traded']
stock_india = data_india['Close']
stock_japan = data_japan['Close']
stock_england = data_england['Settle']
stock_china = data_china['Value']

# Calculate monthly returns (21 trading days)
germany_return = stock_germany.pct_change(21)
france_return = stock_france.pct_change(21)
usa_return = stock_usa.pct_change(21)
hongkong_return = stock_hongkong.pct_change(21)
india_return = stock_india.pct_change(21)
japan_return = stock_japan.pct_change(21)
england_return = stock_england.pct_change(21)
china_return = stock_china.pct_change(21)

merged = pd.concat([germany_return, france_return, usa_return, hongkong_return, india_return, japan_return, england_return, china_return], axis=1)
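One way to skip the undefined frames is to collect only the ones that exist and concatenate those. A sketch, assuming each *_return variable is a date-indexed Series that is set to None when the user does not choose that country:
import pandas as pd

returns = {
    'germany': germany_return,
    'france': france_return,
    'usa': usa_return,
    'hongkong': hongkong_return,
    'india': india_return,
    'japan': japan_return,
    'england': england_return,
    'china': china_return,
}
# Keep only the series the user actually produced
available = {name: s for name, s in returns.items() if s is not None}

# The dict keys become column names; join='inner' keeps only the dates
# present in every chosen series, so the columns align by date
merged = pd.concat(available, axis=1, join='inner')
correlations = merged.corr()
print(correlations)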

Unknown column added in user input form

I have a simple data entry form that writes the inputs to a csv file. Everything seems to be working OK, except that extra columns are being added to the file somewhere in the process, seemingly during the user input phase. Here is the code:
import pandas as pd

# Adds all spreadsheets into one list
Batteries = ["MAT0001.csv", "MAT0002.csv", "MAT0003.csv", "MAT0004.csv",
             "MAT0005.csv", "MAT0006.csv", "MAT0007.csv", "MAT0008.csv"]

# User selects battery to log
choice = int(input("Which battery? (1-8):"))

def choosebattery(c):
    while True:
        if c in range(1, 9):
            return Batteries[c - 1]
        print('Sorry, selection must be between 1-8')
        c = int(input("Which battery? (1-8):"))

cfile = choosebattery(choice)
cbat = pd.read_csv(cfile)

# Collect cycle input
print("Enter Current Cycle")
response = None
while response not in {"Y", "N", "y", "n"}:
    response = input("Please enter Y or N: ")
cy = response

# Charger input
print("Enter Current Charger")
response = None
while response not in {"SC-G", "QS", "Bosca", "off", "other"}:
    response = input("Please enter one: 'SC-G', 'QS', 'Bosca', 'off', 'other'")
if response == "other":
    explain = input("Please explain")
    ch = response + ":" + explain
else:
    ch = response

# Location
print("Enter Current Location")
response = None
while response not in {"Rack 1", "Rack 2", "Rack 3", "Rack 4", "EV001", "EV002", "EV003", "EV004", "Floor", "other"}:
    response = input("Please enter one: 'Rack 1 - 4', 'EV001 - 004', 'Floor' or 'other'")
if response == "other":
    explain = input("Please explain")
    lo = response + ":" + explain
else:
    lo = response

# Voltage
done = False
while not done:
    choice = float(input("Enter Current Voltage:"))
    modchoice = choice * 10
    if modchoice in range(500, 700):
        vo = choice
        done = True
    else:
        print('Sorry, selection must be between 50 and 70')

# Add inputs to current battery dataframe
log = pd.DataFrame([[cy, ch, lo, vo]], columns=["Cycle", "Charger", "Location", "Voltage"])
clog = pd.concat([cbat, log], axis=0)
clog.to_csv(cfile, index=False)
pd.read_csv(cfile)
And I receive:
Out[18]:
Charger Cycle Location Unnamed: 0 Voltage
0 off n Floor NaN 50.0
Where is the "Unnamed" column coming from?
There's an 'unnamed' column coming from your csv. The most likely reason is that the lines in your input csv files end with a comma (i.e. your separator), so pandas interprets that as an additional (nameless) column. If that's the case, check whether your lines end with your separator and change them. For example, if your files are comma-separated, turn:
Column1,Column2,Column3,
val_11,val_12,val_13,
...
Into:
Column1,Column2,Column3
val_11,val_12,val_13
...
Alternatively, try specifying the index column explicitly, as in this answer. I believe some of the confusion also stems from pandas concat reordering your columns.
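A minimal sketch of both fixes (hypothetical file name; either read the stray column as the index, or drop it after reading):
import pandas as pd

# Option 1: treat the first csv column as the index instead of data
cbat = pd.read_csv("MAT0001.csv", index_col=0)

# Option 2: read normally, then drop any accidental unnamed columns
cbat = pd.read_csv("MAT0001.csv")
cbat = cbat.loc[:, ~cbat.columns.str.startswith("Unnamed")]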
