pyodbc customized converter - decimal

When writing a custom output converter in pyodbc, how do I call the default converter? I want to customize the SQL_DECIMAL conversion so that trailing zeros are stripped. I am considering the following two options:
1) Call the default converter for SQL_DECIMAL, convert the result to str, and strip the trailing zeros.
2) Unpack the SQL_DECIMAL value directly, which I don't know how to do yet.
def decimal_converter1(val):
    # calling pyodbc's own decimal converter
    s = str(val)
    return s.rstrip('0')

def decimal_converter2(val):
    # struct.unpack(?, val)
    return

cn.add_output_converter(pyodbc.SQL_DECIMAL, decimal_converter1)
pyodbc 4.0.23, Python 3.6.6, SQL Server 2014
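One way to approach option 1 without the default converter: pyodbc output converters receive the raw column data, and for SQL_DECIMAL this is typically an ASCII-encoded numeric string, so it can be parsed with Python's decimal module and normalized to drop the trailing zeros. This is only a sketch under that assumption; check with print(repr(val)) what your driver actually delivers.

import decimal
import pyodbc

def decimal_converter(val):
    # Assumption: val arrives as an ASCII byte string such as b'123.4500'.
    if val is None:
        return None
    d = decimal.Decimal(val.decode('ascii'))  # b'123.4500' -> Decimal('123.4500')
    # normalize() drops trailing zeros; note it turns Decimal('100') into Decimal('1E+2')
    return d.normalize()

# cn = pyodbc.connect(connection_string)  # existing connection assumed
# cn.add_output_converter(pyodbc.SQL_DECIMAL, decimal_converter)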

Related

F Strings and Interpolation using a properties file

I have a simple Python app and I'm trying to combine a bunch of output messages to standardize the output to the user. I've created a properties file for this, and it looks similar to the following:
[migration_prepare]
console=The migration prepare phase failed in {stage_name} with error {error}!
email=The migration prepare phase failed while in {stage_name}. Contact support!
slack=The **_prepare_** phase of the migration failed
I created a method to handle fetching messages from a Properties file... similar to:
from configparser import ConfigParser, NoOptionError, NoSectionError

def get_msg(category, message_key, prop_file_location="messages.properties"):
    """Get a string from a properties file that is utilized similar to a dictionary and be used in subsequent
    messaging between console, slack and email communications"""
    message = None
    config = ConfigParser()
    try:
        dataset = config.read(prop_file_location)
        if len(dataset) == 0:
            raise ValueError("failed to find property file")
        message = config.get(category, message_key).replace('\\n', '\n')  # if it contains newline characters i.e. \n
    except NoOptionError as no:
        print(f"Bad option for value {message_key}")
        print(f"{no}")
    except NoSectionError as ns:
        print(f"There is no section in the properties file {prop_file_location} that contains category {category}!")
        print(f"{ns}")
    return f"{message}"
The method returns the f-string fine to the calling class. My question is: if the string in my properties file contains text such as {some_value} that is intended to be interpolated in the calling class, the way an f-string with curly brackets would be, why does it come back as a string literal? The output is the literal text, not the interpolated value I expect:
What I get: The migration prepare phase failed while in {stage_name} stage. Contact support!
What I would like: The migration prepare phase failed while in Reconciliation stage. Contact support!
I would like the output from the method to return the interpolated value. Has anyone done anything like this?
I am not sure where you define your stage_name, but in order to interpolate inside a config file you need to use ${stage_name}.
Interpolation in f-strings and in ConfigParser files is not the same thing.
Update: added two usage examples:
# ${} option using ExtendedInterpolation
from configparser import ConfigParser, ExtendedInterpolation

parser = ConfigParser(interpolation=ExtendedInterpolation())
parser.read_string('[example]\n'
                   'x=1\n'
                   'y=${x}')
print(parser['example']['y'])  # y = '1'

# another option - %()s (the default BasicInterpolation)
from configparser import ConfigParser

parser = ConfigParser()
parser.read_string('[example]\n'
                   'x=1\n'
                   'y=%(x)s')
print(parser['example']['y'])  # y = '1'
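If the values are only known at the call site rather than inside the config file, another option (not part of the answer above, just a sketch) is to keep the {stage_name} placeholder literal in the properties file and apply str.format() to the template that get_msg() returns:

# Sketch: interpolate at the call site with str.format().
# Assumes get_msg() and the messages.properties file shown in the question.
template = get_msg('migration_prepare', 'email')
# 'The migration prepare phase failed while in {stage_name}. Contact support!'
print(template.format(stage_name='Reconciliation'))
# 'The migration prepare phase failed while in Reconciliation. Contact support!'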

__post_init__ of python 3.x dataclasses is not called when loaded from yaml

Please note that I have already referred to the StackOverflow question here. I am posting this question to investigate whether calling __post_init__ is safe or not. Please check the question till the end.
Check the code below. In step 3 we load dataclass A from a yaml string. Note that it does not call the __post_init__ method.
import dataclasses
import yaml

@dataclasses.dataclass
class A:
    a: int = 55

    def __post_init__(self):
        print("__post_init__ got called", self)

print("\n>>>>>>>>>>>> 1: create dataclass object")
a = A(33)
print(a)  # print dataclass
print(dataclasses.fields(a))

print("\n>>>>>>>>>>>> 2: dump to yaml")
s = yaml.dump(a)
print(s)  # print yaml repr

print("\n>>>>>>>>>>>> 3: create class from str")
a_ = yaml.load(s)
print(a_)  # print dataclass loaded from yaml str
print(dataclasses.fields(a_))
The solution that I see for now is calling __post_init__ on my own at the end, as in the snippet below:
a_.__post_init__()
I am not sure if this is a safe recreation of a yaml-serialized dataclass. Also, it will pose a problem when __post_init__ takes kwargs, in the case where dataclass fields are of dataclasses.InitVar type.
This behavior is working as intended. You are dumping an existing object, so when you load it pyyaml intentionally avoids initializing the object again. The direct attributes of the dumped object will be saved even if they are created in __post_init__, because that function runs prior to the dump. When you want the side effects that come from __post_init__, like the print statement in your example, you need to ensure that initialization occurs.
There are a few ways to accomplish this. You can use either the metaclass approach or the constructor/representer approach described in pyyaml's documentation. You could also manually alter the dumped string in your example to use '!!python/object/new:' instead of '!!python/object:'. If your eventual goal is to have the yaml file generated in a different manner, then this might be a solution.
See below for an update to your code that uses the metaclass approach and calls __post_init__ when loading from the dumped class object. The call to cls(**fields) in from_yaml ensures that the object is initialized. yaml.load uses cls.__new__ to create objects tagged with '!!python/object:' and then loads the saved attributes into the object manually.
import dataclasses
import yaml

@dataclasses.dataclass
class A(yaml.YAMLObject):
    a: int = 55

    def __post_init__(self):
        print("__post_init__ got called", self)

    yaml_tag = '!A'
    yaml_loader = yaml.SafeLoader

    @classmethod
    def from_yaml(cls, loader, node):
        fields = loader.construct_mapping(node, deep=True)
        return cls(**fields)

print("\n>>>>>>>>>>>> 1: create dataclass object")
a = A(33)
print(a)  # print dataclass
print(dataclasses.fields(a))

print("\n>>>>>>>>>>>> 2: dump to yaml")
s = yaml.dump(a)
print(s)  # print yaml repr

print("\n>>>>>>>>>>>> 3: create class from str")
a_ = yaml.load(s, Loader=A.yaml_loader)
print(a_)  # print dataclass loaded from yaml str
print(dataclasses.fields(a_))
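For completeness, the constructor/representer route mentioned above might look roughly like the sketch below; the '!A' tag and the use of dataclasses.asdict() are my own choices here, not something prescribed by the original answer.

import dataclasses
import yaml

@dataclasses.dataclass
class A:
    a: int = 55

    def __post_init__(self):
        print("__post_init__ got called", self)

def a_representer(dumper, obj):
    # dump the dataclass fields as a mapping tagged '!A'
    return dumper.represent_mapping('!A', dataclasses.asdict(obj))

def a_constructor(loader, node):
    # rebuild the object through the normal constructor so __post_init__ runs on load
    fields = loader.construct_mapping(node, deep=True)
    return A(**fields)

yaml.add_representer(A, a_representer)
yaml.add_constructor('!A', a_constructor, Loader=yaml.SafeLoader)

s = yaml.dump(A(33))    # '!A\na: 33\n'
a_ = yaml.safe_load(s)  # prints "__post_init__ got called ..." again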

Code incompatibility issues - Python 2.x/ Python 3.x

I have this code:
from abc import ABCMeta, abstractmethod

class Instruction(object):
    __metaclass__ = ABCMeta

    def __init__(self, identifier_byte):
        # type: (int) -> None
        self.identifier_byte = identifier_byte

    @abstractmethod
    def process(self):
        print("Identifier byte: ()".format(self.identifier_byte))

class LDAInstruction(Instruction):
    def process(self):
        super().process()
which works fine with Python 3.2 but not with 2.6. Then, based on this topic: TypeError: super() takes at least 1 argument (0 given) error is specific to any python version?,
I changed the last line to:
super(Instruction,self).process()
which causes this error message on this precise line:
AttributeError: 'super' object has no attribute 'process'
To me it seems that there is no "process" method reachable through the super invocation. Is Python saying that "super" is an independent object, unrelated to Instruction? If yes, how can I tell it that super should only invoke the base class method? If not, how should I proceed? Thanks for any ideas.
You're passing the wrong class to super in your call. You need to pass the class you're making the call from, not the base class. Change it to this and it should work:
super(LDAInstruction, self).process()
It's unrelated to your main error, but I'd further note that the base-class implementation of process probably has an error with its attempt at string formatting. You probably want {0} instead of () in the format string. In Python 2.7 and later, you could omit the 0, and just use {}, but for Python 2.6 you have to be explicit.
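Putting both fixes together, a Python 2.6-compatible version of the snippet might look like this (a sketch based on the answer above, with the explicit {0} index included):

from abc import ABCMeta, abstractmethod

class Instruction(object):
    __metaclass__ = ABCMeta

    def __init__(self, identifier_byte):
        # type: (int) -> None
        self.identifier_byte = identifier_byte

    @abstractmethod
    def process(self):
        # explicit {0} index so the format call also works on Python 2.6
        print("Identifier byte: {0}".format(self.identifier_byte))

class LDAInstruction(Instruction):
    def process(self):
        # pass the class the call is made from, not the base class
        super(LDAInstruction, self).process()

LDAInstruction(0xA9).process()  # prints "Identifier byte: 169"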

Using re.sub with jinja2.Markup escaping in Python 3.6?

So I have the following function in my Flask app.
from jinja2 import Markup

def markup_abbreviations(txt, match_map, match_regex):
    html = Markup.escape(txt)
    sub_template = Markup('<abbr title="%s">%s</abbr>')

    def replace_callback(m):
        return sub_template % (match_map[m.group(0)], m.group(0))

    return match_regex.sub(replace_callback, html)
Example arguments:
import re

txt = 'blah blah blah etc., blah blah'
match_map = {
    'etc.': 'et cetera',
    'usu.': 'usually',
}
match_regex = re.compile(
    '|'.join(r'\b' + re.escape(k) for k in match_map)
)
This was working very well, turning "etc." into "<abbr title=\"et cetera\">etc.</abbr>" and so on, on my local Python 3.3 machine.
Then I figured I wanted to deploy to Heroku, and it says it only supports the latest Python, which is Python 3.6.1. It's different from the one I have locally, but, eh, whatever. It works... mostly.
Except my function above now gives me "&lt;abbr title=&#34;et cetera&#34;&gt;etc.&lt;/abbr&gt;": the markup ends up HTML-escaped in the rendered page.
I assume that between Python 3.3 and Python 3.6 the re.sub implementation changed somehow and no longer uses the methods of the string that was passed in to build the output, so Markup's auto-escaping methods aren't used. Instead a new str is built from scratch, which is why re.sub now returns a plain str and not Markup anymore.
How can I use re.sub with jinja2.Markup in Python 3.6 and make my function work once again?
The Markup class just marks a string as safe for HTML. It means that the string does not have to be escaped when it is placed into the template.
Now that re.sub() returns a new str object, what you have to do is mark the new object as safe (wrap it in Markup):
def markup_abbreviations(txt, match_map, match_regex):
    html = Markup.escape(txt)
    sub_template = '<abbr title="%s">%s</abbr>'

    def replace_callback(m):
        return sub_template % (match_map[m.group(0)], m.group(0))

    return Markup(match_regex.sub(replace_callback, html))
I checked all the "What's New" documents from Python 3.3 to 3.6 and there is nothing about a change in the behavior of the re module (well, there is something, but it shouldn't be connected to your problem). Maybe someone else knows what happened...
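For reference, a quick usage sketch with the example arguments from the question, assuming the revised markup_abbreviations above and from jinja2 import Markup:

import re
from jinja2 import Markup

match_map = {
    'etc.': 'et cetera',
    'usu.': 'usually',
}
match_regex = re.compile('|'.join(r'\b' + re.escape(k) for k in match_map))

result = markup_abbreviations('blah blah blah etc., blah blah',
                              match_map, match_regex)
print(result)
# blah blah blah <abbr title="et cetera">etc.</abbr>, blah blah
# result is a Markup instance, so Jinja2 will not escape it again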

mod_wsgi with Python 3.4 gives the error "sequence of byte string values expected, value of type list found"

I'm trying to return MySQL data to the client and I get:
mod_wsgi (pid=2304): Exception occurred processing WSGI script. TypeError: sequence of byte string values expected, value of type list found
import mysql.connector

def application(environ, start_response):
    result = ChildClass().getValue()
    status = '200 OK'
    output = result
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    print(output)
    return [output]

class ChildClass():  # define child class
    print('ppp')

    def __init__(self):
        print("Calling child constructor")

    def childMethod(self):
        print('Calling child method')
        # Parentclass().parentMethod()

    def getValue(self):
        # Open database connection
        db = mysql.connector.connect(user='root', password='55118', host='127.0.0.1', database='test')
        cursor = db.cursor()
        query = ("SELECT * from employees2")
        cursor.execute(query)
        # for (first_name) in cursor:
        return cursor.fetchall()
How do I convert the result of cursor.fetchall() to bytes?
If you are following the modwsgi readthedocs it provides a small snippet to check if mod_wsgi is working on your server. However, I found that the code fails when using Python 3.4 and Django 1.9.2 with Apache and mod_wsgi for Python 3 module installed. I would keep getting "TypeError: sequence of byte string values expected, value of type str found".
The answer was to explicitly put 'b' in front of my strings to make them byte strings instead of the default unicode. So the fix was to say:
output = b'Hello World!'
And when returning at the bottom, make sure you are returning a list, e.g.:
return [output]
This stumped me for hours until I finally had to read PEP 3333 (https://www.python.org/dev/peps/pep-3333/#a-note-on-string-types) and read the "Note on Strings" section.
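Applied to the question's code, a minimal sketch (assuming plain-text output and UTF-8 encoding; the row formatting here is just illustrative) could look like:

def application(environ, start_response):
    rows = ChildClass().getValue()  # list of tuples from cursor.fetchall()
    # Build one text block, then encode it: WSGI response bodies must be byte strings.
    text = '\n'.join(', '.join(str(col) for col in row) for row in rows)
    output = text.encode('utf-8')
    response_headers = [('Content-type', 'text/plain; charset=utf-8'),
                        ('Content-Length', str(len(output)))]
    start_response('200 OK', response_headers)
    return [output]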
I would just add that you have to include
<meta charset="utf-8">
in your HTML head, and then your UTF-8 content will display correctly.
