Odoo 12: How to prevent a default field method from being executed - python-3.x

I scheduled a cron that executes on the 1st of every month; its purpose is to allocate leave to all employees according to their tags. Here is a sample of my code:
for leave in leave_type_ids:
    for employee_tag in employee_tag_ids:
        values = {
            'name': 'Allocation mensuelle %s %s' % (now.strftime('%B'), now.strftime('%Y')),
            'holiday_status_id': leave.id,
            'number_of_days': employee_tag.allocation,
            'holiday_type': 'category',
            'category_id': employee_tag.id,
        }
        try:
            self.create(values).action_approve()
        except Exception as e:
            _logger.critical(e)
I want to point out that self is an instance of 'hr.leave.allocation'.
The problem is that when I create the record, the field employee_id is automatically filled with the user/employee OdooBot (the one that executes the cron), and that is not all: the employee OdooBot is also allocated the leave.
This behavior is due to the following code in Odoo's native modules:
def _default_employee(self):
    return self.env.context.get('default_employee_id') or self.env['hr.employee'].search([('user_id', '=', self.env.uid)], limit=1)

employee_id = fields.Many2one(
    'hr.employee', string='Employee', index=True, readonly=True,
    states={'draft': [('readonly', False)], 'confirm': [('readonly', False)]},
    default=_default_employee, track_visibility='onchange')
So my question is: how can I prevent this when the record is created by the cron, while keeping the normal behavior in the form view?
The "employé" (Employee) field should be empty in this case, because it is an allocation by tag.

You must loop over hr.employee, because then you can do either of the following:
self.with_context(default_employee_id=employee.id).create(...)
OR
self.sudo(employee.user_id.id).create(...)
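A minimal sketch of that loop inside the cron method, reusing the values dict from the question (for a per-employee allocation you would likely also switch holiday_type to 'employee' and set employee_id; that adjustment is an assumption, not from the answer):

for employee in self.env['hr.employee'].search([]):
    vals = dict(values, holiday_type='employee', employee_id=employee.id)
    # Overriding default_employee_id in the context stops _default_employee
    # from falling back to the cron user (OdooBot).
    self.with_context(default_employee_id=employee.id).create(vals).action_approve()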


Odoo 13: Creating invoice from purchase order in Odoo via API

I am brand new to Odoo! On Odoo 13 EE I am trying to create and confirm a vendor bill after importing a purchase order and the item receipts. I can create an invoice directly, but I haven't been able to link it to the PO/receipt.
Sadly, under purchase.order the method action_create_invoice seems to be hidden from the API.
order_id = PurchaseOrder.create(po)
purchaseorder = PurchaseOrder.browse([order_id])
print("Before validating:", purchaseorder.name, purchaseorder.state)  # draft
odoo.env.context['check_move_validity'] = True
purchaseorder.button_confirm()
purchaseorder = PurchaseOrder.browse([order_id])
picking_count = purchaseorder.picking_count
print("After Post:", purchaseorder.name, purchaseorder.state, "picking_count =", purchaseorder.picking_count)
if picking_count == 0:
    print("Nothing to receive. Straight to Billing.")  # ok so far
    tryme = purchaseorder.action_view_invoice()
    ## Error => odoorpc.error.RPCError: type object 'purchase.order' has no attribute 'action_create_invoice'
So I tried overriding/extending it this way:
class PurchaseOrder(models.Model):
    _inherit = 'purchase.order'

    @api.model
    def create_invoice(self, context=None):
        # try 1 => odoorpc.error.RPCError: 'super' object has no attribute 'action_create_invoice'
        rtn = super().action_create_invoice(self)
        # try 2 => odoorpc.error.RPCError: name 'action_create_invoice' is not defined
        # rtn = action_create_invoice(self)
        # try 3 => Error %s 'super' object has no attribute 'action_create_invoice'
        # rtn = super(models.Model, self).action_create_invoice(self)
        return rtn
I hope somebody can suggest a solution! Thank you.
Please don't customize it without having functional knowledge of Odoo. In Odoo, if you go to the Purchase settings, you will find the billing options under Invoicing, with two choices: ordered quantities and received quantities. If it is ordered quantities, you can create the invoice as soon as the purchase order is confirmed. If it is received quantities, then after confirming the purchase order an incoming shipment is created, and only after that incoming shipment is processed will the Create Invoice button appear on the purchase order.
If you can do it from the browser client, then you should just look at what API calls the browser sends to the Odoo server (in Chrome, open the developer tools with F12 and look in the Network tab), so that you can simply replicate that communication.
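As a hedged sketch with odoorpc, assuming the Network tab shows the Create Bill button calling action_create_invoice (the method name varies between Odoo versions, so copy whatever your browser actually sends; po is the same values dict as in the question, and the credentials are hypothetical):

import odoorpc

odoo = odoorpc.ODOO('localhost', port=8069)
odoo.login('mydb', 'admin', 'admin')  # hypothetical database and credentials

PurchaseOrder = odoo.env['purchase.order']
order_id = PurchaseOrder.create(po)
PurchaseOrder.browse(order_id).button_confirm()

# Replay the browser's RPC call. execute() invokes a public method on the
# model by name, even when the odoorpc proxy does not expose it directly.
odoo.execute('purchase.order', 'action_create_invoice', [order_id])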

Dump series back into InfluxDB after querying with replaced field value

Scenario
I want to send data to an MQTT broker (cloud) by querying measurements from InfluxDB.
I have a field in the schema called status. It can be either 1 or 0. status=0 indicates that the series has not been sent to the cloud yet. If I get an acknowledgment from the MQTT broker, then I wish to write the queried points back into the database with status=1.
As mentioned in the InfluxDB FAQ regarding duplicate data: if a point has the same timestamp as an existing point but a different field value, the new field value overwrites the old one.
In order to test this I created the following:
CREATE DATABASE dummy
USE dummy
INSERT meas_1,type=t1 status=0,value=123 1536157064275338300
query:
SELECT * FROM meas_1
provides
time                 status  type  value
1536157064275338300  0       t1    123
Now if I want to overwrite the series, I do the following:
INSERT meas_1,type=t1 status=1,value=123 1536157064275338300
which will overwrite the series
time                 status  type  value
1536157064275338300  1       t1    123
(Note: overwriting like this is currently only possible for fields, not for tags, in InfluxDB.)
Usage
Query some information using the client with "status"=0.
Restructure JSON to be sent to the cloud
Send the information to cloud
If successful, then write the output from Step 1 back into the DB but with status=1.
I am using the InfluxDBClient Python3 to create the Application (MQTT + InfluxDB)
Within the write_points API there is a parameter called batch_size which requires an int as input.
I am not sure how I can use this for the application that I want to build. Can someone guide me on this, or on the schema of the DB, so that I can upload actual and non-redundant information to the cloud?
The batch_size is actually the length of the list of measurements that needs to be passed to write_points.
Steps
Create client and query from measurement (here, we query gps information)
client = InfluxDBClient(database='dummy')
op = client.query('SELECT * FROM gps WHERE "status"=0', epoch='ns')
Make the ResultSet into a list:
batch = list(op.get_points('gps'))
Create an empty list for the updated points:
updated_batch = []
Parse through each measurement and change the status flag to 1. Note that numeric field values in InfluxDB default to float:
for each in batch:
    new_mes = {
        'measurement': 'gps',
        'tags': {
            'type': 'gps'
        },
        'time': each['time'],
        'fields': {
            'lat': float(each['lat']),
            'lon': float(each['lon']),
            'alt': float(each['alt']),
            'status': float(1)
        }
    }
    updated_batch.append(new_mes)
Finally, dump the points back via the client with batch_size set to the length of updated_batch:
client.write_points(updated_batch, batch_size=len(updated_batch))
This overwrites the series because it contains the same timestamps, with the status field now set to 1.
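Putting the four usage steps together, a minimal end-to-end sketch assuming a local MQTT broker (the topic name 'cloud/gps' is hypothetical):

import json
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

client = InfluxDBClient(database='dummy')
mqttc = mqtt.Client()
mqttc.connect('localhost')
mqttc.loop_start()  # background network loop so publish() can complete

# Step 1: query the points that have not been sent yet.
batch = list(client.query('SELECT * FROM gps WHERE "status"=0',
                          epoch='ns').get_points('gps'))
if batch:
    # Steps 2 and 3: restructure as JSON and publish with QoS 1 so the
    # broker acknowledges receipt.
    info = mqttc.publish('cloud/gps', json.dumps(batch), qos=1)
    info.wait_for_publish()
    if info.is_published():
        # Step 4: write the same points back with status=1; identical
        # timestamps mean the status field is overwritten, not duplicated.
        updated_batch = [{'measurement': 'gps',
                          'tags': {'type': 'gps'},
                          'time': each['time'],
                          'fields': {'lat': float(each['lat']),
                                     'lon': float(each['lon']),
                                     'alt': float(each['alt']),
                                     'status': float(1)}}
                         for each in batch]
        client.write_points(updated_batch, batch_size=len(updated_batch))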

Cron error old_api while running a Scheduled Action in Odoo 8

I have created a function which works fine if I use it through the view, but it does not work as a scheduled action; the OpenERP log shows the following error:
TypeError: old_api() takes at least 4 arguments (3 given)
My module
class account_invoice(models.Model):
    _name = 'account.invoice'
    _rec_name = 'invoice_number'

    @api.multi
    def create_invoice(self):
        id = self.id
        amount = 0
        journal = self.env['journal.entry']
        for credit in self.invoice_line:
            fee = credit.amount * credit.qty
            if credit.account.parent.type.name == "Revenue":
                journal.sudo().create({'account': credit.account.id,
                                       'credit': fee,
                                       'student_id': self.student_id.id})
For the method to work as a scheduled action you should decorate it with @api.model instead of @api.multi, which is meant for view buttons.
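A minimal sketch of the cron-friendly variant (Odoo 8 import path; the empty search domain is a placeholder for whatever filter the scheduled action should apply):

from openerp import api, models

class account_invoice(models.Model):
    _inherit = 'account.invoice'

    @api.model
    def create_invoice_cron(self):
        # A scheduled action calls the method on an empty recordset, so
        # search for the invoices to process instead of relying on self.
        for invoice in self.search([]):
            invoice.create_invoice()

The scheduled action would then point at create_invoice_cron instead of create_invoice.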

Why doesn't psycopg2 allow us to open multiple server-side cursors in the same connection?

I am curious why psycopg2 doesn't allow opening multiple server-side cursors (http://initd.org/psycopg/docs/usage.html#server-side-cursors) in the same connection. I ran into this problem recently and had to solve it by replacing the second cursor with a client-side cursor. But I still want to know if there is any way to do that.
For example, I have these 2 tables on Amazon Redshift:
CREATE TABLE tbl_account (
    acctid varchar(100),
    regist_day date
);

CREATE TABLE tbl_my_artist (
    user_id varchar(100),
    artist_id bigint
);
INSERT INTO tbl_account
(acctid, regist_day)
VALUES
('TEST0000000001', DATE '2014-11-23'),
('TEST0000000002', DATE '2014-11-23'),
('TEST0000000003', DATE '2014-11-23'),
('TEST0000000004', DATE '2014-11-23'),
('TEST0000000005', DATE '2014-11-25'),
('TEST0000000006', DATE '2014-11-25'),
('TEST0000000007', DATE '2014-11-25'),
('TEST0000000008', DATE '2014-11-25'),
('TEST0000000009', DATE '2014-11-26'),
('TEST0000000010', DATE '2014-11-26'),
('TEST0000000011', DATE '2014-11-24'),
('TEST0000000012', DATE '2014-11-24')
;
INSERT INTO tbl_my_artist
(user_id, artist_id)
VALUES
('TEST0000000001', 2000011247),
('TEST0000000001', 2000157208),
('TEST0000000001', 2000002648),
('TEST0000000002', 2000383724),
('TEST0000000003', 2000002546),
('TEST0000000003', 2000417262),
('TEST0000000004', 2000076873),
('TEST0000000004', 2000417266),
('TEST0000000005', 2000077991),
('TEST0000000005', 2000424268),
('TEST0000000005', 2000168784),
('TEST0000000006', 2000284581),
('TEST0000000007', 2000284581),
('TEST0000000007', 2000000642),
('TEST0000000008', 2000268783),
('TEST0000000008', 2000284581),
('TEST0000000009', 2000088635),
('TEST0000000009', 2000427808),
('TEST0000000010', 2000374095),
('TEST0000000010', 2000081797),
('TEST0000000011', 2000420006),
('TEST0000000012', 2000115887)
;
I want to select from those 2 tables, then do something with the query results.
I use 2 server-side cursors because I need 2 nested loops over the results. I want to use server-side cursors because the results can be very large.
I use fetchmany() instead of fetchall() because I'm running on a single-node cluster.
Here is my code:
import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')

cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)
cur2 = conn.cursor(name='cursor2', cursor_factory=DictCursor)

cur1.execute("""SELECT acctid, regist_day FROM tbl_account
                WHERE regist_day <= '2014-11-25'
                ORDER BY 1""")
for record1 in cur1.fetchmany(50):
    cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                    WHERE user_id = '%s'
                    ORDER BY 1""" % (record1["acctid"]))
    for record2 in cur2.fetchmany(50):
        print '(acctid, artist_id, regist_day): (%s, %s, %s)' % (
            record1["acctid"], record2["artist_id"], record1["regist_day"])
        # do something with these values
conn.close()
When running, I got an error:
Traceback (most recent call last):
  File "C:\Users\MLD1\Desktop\demo_cursor.py", line 20, in <module>
    for record2 in cur2.fetchmany(50):
  File "C:\Python27\lib\site-packages\psycopg2\extras.py", line 72, in fetchmany
    res = super(DictCursorBase, self).fetchmany(size)
InternalError: opening multiple cursors from within the same client connection is not allowed.
That error occurred at line 20, when I tried to fetch results from the second cursor.
An answer four years later, but it is possible to have more than one cursor open from the same connection. (It may be that the library was updated to fix the problem above.)
The caveat is that you are only allowed to call execute() once on a named cursor, so if you reuse one of the cursors in the fetchmany loop you'd need to either remove the name or create another "anonymous" cursor.
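A minimal sketch of that workaround applied to the code above: the outer, potentially huge result set stays on a named (server-side) cursor, while the inner query runs on an unnamed (client-side) cursor that can be re-executed; the parameterized form also avoids quoting problems:

import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')

outer = conn.cursor(name='cursor1', cursor_factory=DictCursor)  # server-side
inner = conn.cursor(cursor_factory=DictCursor)                  # client-side

outer.execute("""SELECT acctid, regist_day FROM tbl_account
                 WHERE regist_day <= '2014-11-25'
                 ORDER BY 1""")
for record1 in outer.fetchmany(50):
    inner.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                     WHERE user_id = %s
                     ORDER BY 1""", (record1["acctid"],))
    for record2 in inner.fetchall():
        print("(acctid, artist_id, regist_day): ({}, {}, {})".format(
            record1["acctid"], record2["artist_id"], record1["regist_day"]))
conn.close()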

Exporting from MS Excel to MS Access with intermediate processing

I have an application which produces reports in Excel (.XLS) format. I need to append the data from these reports to an existing table in a MS Access 2010 database. A typical record is:
INC000000004154 Closed Cbeebies BBC Childrens HQ6 monitor wall dropping out. HQ6 P3 3/7/2013 7:03:01 PM 3/7/2013 7:03:01 PM 3/7/2013 7:14:15 PM The root cause of the problem was the power supply to the PC which was feeding the monitor. HQ6 Monitor wall dropping out. BBC Third Party Contractor supply this equipment.
The complication is that I need to do some limited processing on the data.
Specifically, I need to do a couple of lookups converting names to numbers, and also parse a date string (the report for some reason puts the dates into the spreadsheet as text rather than in date format).
Now I could do this in Python using XLRD/XLWT, but I would much prefer to do it in Excel or Access. Does anyone have any advice on a good way to approach this? I would very much prefer NOT to use VBA, so could I do something like record an MS Excel macro and then execute that macro on the newly created XLS file?
You can directly import some Excel data into MS Access, but if your requirement is to do some processing on it, then I don't see how you will be able to achieve that without one of the following:
an ETL application, like Pentaho or Talend or others (that would certainly be like using a hammer to crush an ant, though);
some other external data-processing pipeline, in Python or some other programming language;
VBA (whether through macros or hand-coded). VBA has been really good at doing that sort of thing in Access for literally decades.
Since you are using Excel and Access, staying within that realm looks like the best solution for solving your issue.
Just use queries:
You import the data without transformation into a table whose sole purpose is to accommodate the data from Excel; then you create queries from that raw data to add the missing information and massage it before appending the result to your final destination table.
That solution has the advantage of letting you create simple steps in Access that you can easily record using macros.
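If you later want to drive that import-then-append flow from Python instead of an Access macro, a minimal sketch with pyodbc (the staging and destination table names, columns, and file path are all hypothetical):

import pyodbc

cxn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\path\to\db.accdb;")
cur = cxn.cursor()

# The raw Excel rows were imported into STAGING beforehand; this append
# query does the name-to-number lookup and the text-to-date conversion
# while loading the destination table.
cur.execute("""INSERT INTO INCIDENTS (REF, FACILITY, OPENED)
               SELECT s.REF, f.ID, CDate(s.OPENED_TEXT)
               FROM STAGING AS s
               INNER JOIN FACILITIES AS f ON s.FACILITY_NAME = f.NAME""")
cxn.commit()
cxn.close()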
I asked this question some time ago and decided it would be easier to do it in Python. Gord asked me to share, and here it is (sorry about the delay; other projects took priority for a while).
"""
Routine to migrate the S7 data from MySQL to the new Access
database.
We're using the pyodbc libraries to connect to Microsoft Access
Note that there are 32- and 64-bit versions of these libraries
available but in order to work the word-length for pyodbc and by
implication Python and all its associated compiled libraries must
match that of MS Access. Which is an arse as I've just had to
delete my 64-bit installation of Python and replace it and all
the libraries with the 32-bit version.
Tim Greening-Jackson 08 May 2013 (timATgreening-jackson.com)
"""
import pyodbc
import re
import datetime
import tkFileDialog
from Tkinter import *
class S7Incident:
    """
    Class containing the records downloaded from the S7.INCIDENTS table
    """
    def __init__(self, id_incident, priority, begin, acknowledge,
                 diagnose, workaround, fix, handoff, lro, nlro,
                 facility, ctas, summary, raised, code):
        self.id_incident = unicode(id_incident)
        self.priority = {u'P1': 1, u'P2': 2, u'P3': 3, u'P4': 4, u'P5': 5}[unicode(priority.upper())]
        self.begin = begin
        self.acknowledge = acknowledge
        self.diagnose = diagnose
        self.workaround = workaround
        self.fix = fix
        self.handoff = True if handoff else False
        self.lro = True if lro else False
        self.nlro = True if nlro else False
        self.facility = unicode(facility)
        self.ctas = ctas
        self.summary = "** NONE ***" if summary is None else summary.replace("'", "")
        self.raised = raised.replace("'", "")
        self.code = 0 if code is None else code
        self.production = None
        self.dbid = None

    def __repr__(self):
        return "[{}] ID:{} P{} Prod:{} Begin:{} A:{} D:+{}s W:+{}s F:+{}s\nH/O:{} LRO:{} NLRO:{} Facility={} CTAS={}\nSummary:'{}',Raised:'{}',Code:{}".format(
            self.id_incident, self.dbid, self.priority, self.production, self.begin,
            self.acknowledge, self.diagnose, self.workaround, self.fix,
            self.handoff, self.lro, self.nlro, self.facility, self.ctas,
            self.summary, self.raised, self.code)

    def ProcessIncident(self, cursor, facilities, productions):
        """
        Produces the SQL necessary to insert the incident into the Access
        database, executes it and then gets the autonumber ID (dbid) of the
        newly created incident (this is used so LRO, NLRO, CTAS and AD1 can
        refer to their parent incident).
        If the incident is classed as LRO, NLRO or CTAS then the appropriate
        record is created. Returns the dbid.
        """
        if self.raised.upper() in productions:
            self.production = productions[self.raised.upper()]
        else:
            self.production = 0
        sql = """INSERT INTO INCIDENTS (ID_INCIDENT, PRIORITY, FACILITY, BEGIN,
            ACKNOWLEDGE, DIAGNOSE, WORKAROUND, FIX, HANDOFF, SUMMARY, RAISED, CODE, PRODUCTION)
            VALUES ('{}', {}, {}, #{}#, {}, {}, {}, {}, {}, '{}', '{}', {}, {})
            """.format(self.id_incident, self.priority, facilities[self.facility], self.begin,
                       self.acknowledge, self.diagnose, self.workaround, self.fix,
                       self.handoff, self.summary, self.raised, self.code, self.production)
        cursor.execute(sql)
        cursor.execute("SELECT @@IDENTITY")
        self.dbid = cursor.fetchone()[0]
        if self.lro:
            self.ProcessLRO(cursor, facilities[self.facility])
        if self.nlro:
            self.ProcessNLRO(cursor, facilities[self.facility])
        if self.ctas:
            self.ProcessCTAS(cursor, facilities[self.facility], self.ctas)
        return self.dbid

    def ProcessLRO(self, cursor, facility):
        sql = "INSERT INTO LRO (PID, DURATION, FACILITY) VALUES ({}, {}, {})"\
            .format(self.dbid, self.workaround, facility)
        cursor.execute(sql)

    def ProcessNLRO(self, cursor, facility):
        sql = "INSERT INTO NLRO (PID, DURATION, FACILITY) VALUES ({}, {}, {})"\
            .format(self.dbid, self.workaround, facility)
        cursor.execute(sql)

    def ProcessCTAS(self, cursor, facility, code):
        sql = "INSERT INTO CTAS (PID, DURATION, FACILITY, CODE) VALUES ({}, {}, {}, {})"\
            .format(self.dbid, self.workaround, facility, self.ctas)
        cursor.execute(sql)
class S7AD1:
    """
    S7.AD1 records.
    """
    def __init__(self, id_ad1, date, ref, commentary, adjustment):
        self.id_ad1 = id_ad1
        self.date = date
        self.ref = unicode(ref)
        self.commentary = unicode(commentary)
        self.adjustment = float(adjustment)
        self.pid = 0
        self.production = 0

    def __repr__(self):
        return "[{}] Date:{} Parent:{} PID:{} Amount:{} Commentary: {} "\
            .format(self.id_ad1, self.date.strftime("%d/%m/%y"), self.ref, self.pid, self.adjustment, self.commentary)

    def SetPID(self, pid):
        self.pid = pid

    def SetProduction(self, p):
        self.production = p

    def Process(self, cursor):
        sql = "INSERT INTO AD1 (pid, begin, commentary, production, adjustment) VALUES ({}, #{}#, '{}', {}, {})"\
            .format(self.pid, self.date.strftime("%d/%m/%y"), self.commentary, self.production, self.adjustment)
        cursor.execute(sql)
class S7Financial:
    """
    S7 monthly financial summary of income and penalties from the S7.FINANCIALS
    table. These are identical in the new database.
    """
    def __init__(self, month, year, gco, cta, support, sc1, sc2, sc3, ad1):
        self.begin = datetime.date(year, month, 1)
        self.gco = float(gco)
        self.cta = float(cta)
        self.support = float(support)
        self.sc1 = float(sc1)
        self.sc2 = float(sc2)
        self.sc3 = float(sc3)
        self.ad1 = float(ad1)

    def __repr__(self):
        return "Period: {} GCO:{:.2f} CTA:{:.2f} SUP:{:.2f} SC1:{:.2f} SC2:{:.2f} SC3:{:.2f} AD1:{:.2f}"\
            .format(self.begin.strftime("%m/%y"), self.gco, self.cta, self.support, self.sc1, self.sc2, self.sc3, self.ad1)

    def Process(self, cursor):
        """
        Insert into the FINANCIALS table
        """
        sql = "INSERT INTO FINANCIALS (BEGIN, GCO, CTA, SUPPORT, SC1, SC2, SC3, AD1) VALUES (#{}#, {}, {}, {}, {}, {}, {}, {})"\
            .format(self.begin, self.gco, self.cta, self.support, self.sc1, self.sc2, self.sc3, self.ad1)
        cursor.execute(sql)
class S7SC3:
    """
    Miscellaneous S7 SC3 stuff. The new table is identical to the old one.
    """
    def __init__(self, begin, month, year, p1ot, p2ot, totchg, succchg, chgwithinc, fldchg, egychg):
        self.begin = begin
        self.p1ot = p1ot
        self.p2ot = p2ot
        self.changes = totchg
        self.successful = succchg
        self.incidents = chgwithinc
        self.failed = fldchg
        self.emergency = egychg

    def __repr__(self):
        return "{} P1:{} P2:{} CHG:{} SUC:{} INC:{} FLD:{} EGY:{}"\
            .format(self.begin.strftime("%m/%y"), self.p1ot, self.p2ot, self.changes, self.successful, self.incidents, self.failed, self.emergency)

    def Process(self, cursor):
        """
        Inserts a record into the Access database
        """
        sql = "INSERT INTO SC3 (BEGIN, P1OT, P2OT, CHANGES, SUCCESSFUL, INCIDENTS, FAILED, EMERGENCY) VALUES\
            (#{}#, {}, {}, {}, {}, {}, {}, {})"\
            .format(self.begin, self.p1ot, self.p2ot, self.changes, self.successful, self.incidents, self.failed, self.emergency)
        cursor.execute(sql)
def ConnectToAccessFile():
    """
    Prompts the user for an Access database file, connects, creates a cursor,
    cleans out the tables which are to be replaced, gets a hash of the
    facilities table keyed on facility name returning facility id
    """
    # Prompt the user to select which Access DB file he wants to use and then attempt to connect
    root = Tk()
    dbname = tkFileDialog.askopenfilename(parent=root, title="Select output database", filetypes=[('Access 2010', '*.accdb')])
    root.destroy()
    # Connect to the Access (new) database and clean its existing incidents etc. tables out as
    # these will be replaced with the new data
    dbcxn = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=" + dbname + ";")
    dbcursor = dbcxn.cursor()
    print("Connected to {}".format(dbname))
    for table in ["INCIDENTS", "AD1", "LRO", "NLRO", "CTAS", "SC3", "PRODUCTIONS", "FINANCIALS"]:
        print("Clearing table {}...".format(table))
        dbcursor.execute("DELETE * FROM {}".format(table))
    # Get the list of facilities from the Access database...
    dbcursor.execute("SELECT id, facility FROM facilities")
    rows = dbcursor.fetchall()
    dbfacilities = {unicode(row[1]): row[0] for row in rows}
    return dbcxn, dbcursor, dbfacilities
# Entry point
incre = re.compile(r"INC\d{12}[A-Z]?")  # Regex that matches incident references
try:
    dbcxn, dbcursor, dbfacilities = ConnectToAccessFile()
    # Connect to the MySQL S7 (old) database and read the incidents and ad1 tables
    s7cxn = pyodbc.connect("DRIVER={MySQL ODBC 3.51 Driver}; SERVER=localhost;DATABASE=s7; UID=root; PASSWORD=********; OPTION=3")
    print("Connected to MySQL S7 database")
    s7cursor = s7cxn.cursor()
    s7cursor.execute("""
        SELECT id_incident, priority, begin, acknowledge,
               diagnose, workaround, fix, handoff, lro, nlro,
               facility, ctas, summary, raised, code FROM INCIDENTS""")
    rows = s7cursor.fetchall()
    # Discard any incidents which don't have a reference of the form INC... as they are ancient
    print("Fetching incidents")
    s7incidents = {unicode(row[0]): S7Incident(*row) for row in rows if incre.match(row[0])}
    # Get the list of productions from the S7 database to replace the one we've just deleted ...
    print("Fetching productions")
    s7cursor.execute("SELECT DISTINCT RAISED FROM INCIDENTS")
    rows = s7cursor.fetchall()
    s7productions = [r[0] for r in rows]
    # ... now get the AD1s ...
    print("Fetching AD1s")
    s7cursor.execute("SELECT id_ad1, date, ref, commentary, adjustment from AD1")
    rows = s7cursor.fetchall()
    s7ad1s = [S7AD1(*row) for row in rows]
    # ... and the financial records ...
    print("Fetching Financials")
    s7cursor.execute("SELECT month, year, gco, cta, support, sc1, sc2, sc3, ad1 FROM Financials")
    rows = s7cursor.fetchall()
    s7financials = [S7Financial(*row) for row in rows]
    print("Writing financials ({})".format(len(s7financials)))
    [p.Process(dbcursor) for p in s7financials]
    # ... and the SC3s.
    print("Fetching SC3s")
    s7cursor.execute("SELECT begin, month, year, p1ot, p2ot, totchg, succhg, chgwithinc, fldchg, egcychg from SC3")
    rows = s7cursor.fetchall()
    s7sc3s = [S7SC3(*row) for row in rows]
    print("Writing SC3s ({})".format(len(s7sc3s)))
    [p.Process(dbcursor) for p in s7sc3s]
    # Re-create the productions table in the new database. Note we refer to production
    # by number in the incidents table so need to do the SELECT @@IDENTITY to give us the
    # autonumber index. To make sure everything is case-insensitive convert the
    # hash keys to UPPERCASE.
    dbproductions = {}
    print("Writing productions ({})".format(len(s7productions)))
    for p in sorted(s7productions):
        dbcursor.execute("INSERT INTO PRODUCTIONS (PRODUCTION) VALUES ('{}')".format(p))
        dbcursor.execute("SELECT @@IDENTITY")
        dbproductions[p.upper()] = dbcursor.fetchone()[0]
    # Now process the incidents etc. that we have retrieved from the S7 database
    print("Writing incidents ({})".format(len(s7incidents)))
    [s7incidents[k].ProcessIncident(dbcursor, dbfacilities, dbproductions) for k in sorted(s7incidents)]
    # Match the new parent incident IDs in the AD1s and then write to the new table. Some
    # really old AD1s don't have the parent incident reference in the REF field, it is just
    # mentioned SOMEWHERE in the commentary. So if the REF field doesn't match then do a
    # re.search (not re.match!) for it. It isn't essential to match these older AD1s with
    # their parent incident, but it is quite useful (and tidy).
    print("Matching and writing AD1s ({})".format(len(s7ad1s)))
    for a in s7ad1s:
        if a.ref in s7incidents:
            a.SetPID(s7incidents[a.ref].dbid)
            a.SetProduction(s7incidents[a.ref].production)
        else:
            z = incre.search(a.commentary)
            if z and z.group() in s7incidents:
                a.SetPID(s7incidents[z.group()].dbid)
                a.SetProduction(s7incidents[z.group()].production)
        a.Process(dbcursor)
    print("Committing changes")
    dbcursor.commit()
finally:
    print("Closing databases")
    dbcxn.close()
    s7cxn.close()
It turns out that the file has additional complications in terms of mangled data which require a degree of processing that is a pain to do in Excel but trivially simple in Python. So I will re-use some Python 2.x scripts which use the XLRD/XLWT libraries to munge the spreadsheet.
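For reference, a minimal Python 2.x sketch of the xlrd side of that munging (the sheet layout, column indexes and date format are hypothetical):

import datetime
import xlrd

book = xlrd.open_workbook('report.xls')
sheet = book.sheet_by_index(0)
for rownum in range(1, sheet.nrows):  # skip the header row
    ref = sheet.cell_value(rownum, 0)
    # The report writes dates as text, e.g. "3/7/2013 7:03:01 PM", so
    # parse the string rather than using xlrd's own date handling.
    opened = datetime.datetime.strptime(sheet.cell_value(rownum, 5),
                                        "%m/%d/%Y %I:%M:%S %p")
    print ref, opened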
