I'm debugging existing code. I'm trying to find out the intention behind the obviously wrong access to .dicts of a peewee Model in the warning statement in MyDbBackend.store, and how I could correct it.
My guess is that the warning message was meant to add more detailed output about the model that could not be saved. However, the .dicts attribute exists only on the orm.BaseQuery class.
The current output message is not very helpful. I want to provide an improved warning message when i.save() fails; by "improved" I mean including some metadata about the record that failed to save.
So, how can I obtain the BaseQuery from the model, and what would .dicts() output then? Would that information be useful in the context of the warning message?
import peewee as orm

database = orm.Proxy()

class ModelBase(orm.Model):
    class Meta:
        database = database

class MyModel(ModelBase):
    dtfield = orm.DateTimeField(null=True)
    intfield = orm.IntegerField(null=True)
    floatfield = orm.FloatField(null=True)
class MyDbBackend:
    def __init__(self, database):
        self.db = database
        self.records = []  # Holds objects derived from ModelBase
    [...]
    def store(self):
        with self.db.atomic():
            for i in self.records:
                try:
                    i.save()
                except Exception as e:
                    logger.warning("could not save record: {}".format(i.dicts()))
                    raise e
        self.clear()
This fails with:

    logger.warning("could not save record: {}".format(i.dicts()))
AttributeError: 'MyModel' object has no attribute 'dicts'
My guess is that the original code was meant to use playhouse.shortcuts.model_to_dict. That is the only explanation I have for why the original code calls i.dicts(); perhaps it stems from some misunderstanding.
import peewee as orm
from playhouse.shortcuts import model_to_dict

[...]
logger.warning(f"Model dict: {model_to_dict(i, recurse=True, max_depth=2)}")
[...]
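For completeness, here is a minimal sketch of how the corrected store() method could look (my adaptation; it assumes logger is an ordinary logging.Logger configured elsewhere, as the original excerpt implies):

from playhouse.shortcuts import model_to_dict

    def store(self):
        with self.db.atomic():
            for i in self.records:
                try:
                    i.save()
                except Exception:
                    # model_to_dict renders the instance's field values,
                    # which is presumably what the broken .dicts() call intended
                    logger.warning("could not save record: {}".format(
                        model_to_dict(i, recurse=True, max_depth=2)))
                    raise
        self.clear()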
Related
I am trying to build a marshmallow schema to both load and dump data, and I get everything working OK except one field.
Problem description
(If you understand the problem, you don't have to read this).
For loading, the field's type is Decimal, and I used it like that before. Now I want to use this schema for dumping as well, but my Flask API then responds with: TypeError: Object of type Decimal is not JSON serializable. OK, I understand. I changed the type to Float. Then my legacy code started raising an exception while trying to save that field to the database (it accepts Decimal only). I don't want to change the legacy code, so I looked for a solution in the marshmallow docs and found the load_only and dump_only parameters. Those seem like what I want, but here is my problem: I want to set both on the same field. So I wondered whether I could define the field twice and tried this:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)
    money = fields.Float(dump_only=True)
I was hoping for a miracle, of course. Actually, I thought the second definition would simply override the first. What I got instead was the absence of the field altogether.
Workaround solution
So I tried another approach: I created a separate schema for dumping and inherited it from the original schema:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)

class PaymentDumpSchema(PaymentSchema):
    money = fields.Float(dump_only=True)
It works. But I wonder whether there is another, native, "marshmallow-way" solution for this. I have been looking through the docs but can't find anything.
You can use the marshmallow decorator @pre_load; in this hook you can do whatever you want with the incoming payload and return it with the type you need:
from marshmallow import pre_load
Import it like this; inside the decorated method you receive the payload and can change the type as your requirements dictate.
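One caveat: a conversion done in @pre_load runs before field validation, so a Float field would coerce the value back to float during loading. A minimal working sketch that instead converts after loading with @post_load (my adaptation of this idea, not the answerer's exact code):

from decimal import Decimal
from marshmallow import Schema, fields, post_load

class PaymentSchema(Schema):
    money = fields.Float()  # dumps (and loads) as a plain float

    @post_load
    def money_to_decimal(self, data, **kwargs):
        # hand the legacy save path a Decimal after validation
        if 'money' in data:
            data['money'] = Decimal(str(data['money']))
        return data

Dumping works as well, because float() accepts Decimal instances, so PaymentSchema().dump({'money': Decimal('1.5')}) yields {'money': 1.5}.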
UPD: I finally found a good solution.
NEW SOLUTION
The trick is to define your field in load_fields and dump_fields inside the __init__ method.
from marshmallow.fields import Integer, String, Raw
from marshmallow import Schema

class ItemDumpLoadSchema(Schema):
    item = Raw()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if not (self.only and 'item' not in self.only) and \
                not (self.exclude and 'item' in self.exclude):
            self.load_fields['item'] = Integer(missing=0)
            self.dump_fields['item'] = String()
Usage:
>>> ItemDumpLoadSchema().load({})
{'item': 0}
>>> ItemDumpLoadSchema().dump({'item': 0})
{'item': '0'}
Don't forget to also define the field on the schema itself with some placeholder field (Raw in my example); otherwise it may raise an exception in some cases (e.g. when the only and exclude keywords are used).
OLD SOLUTION
A slightly hacky one, based on @prashant-suthar's answer. I named the load field with an _load suffix and implemented @pre_load, @post_load, and error handling.
class ArticleSchema(Schema):
    id = fields.String()
    title = fields.String()
    text = fields.String()

class FlowSchema(Schema):
    article = fields.Nested(ArticleSchema, dump_only=True)
    article_load = fields.Int(load_only=True)

    @pre_load
    def pre_load(self, data, *args, **kwargs):
        if data.get('article'):
            data['article_load'] = data.pop('article')
        return data

    @post_load
    def post_load(self, data, *args, **kwargs):
        if data.get('article_load'):
            data['article'] = data.pop('article_load')
        return data

    def handle_error(self, exc, data, **kwargs):
        if 'article_load' in exc.messages:
            exc.messages['article'] = exc.messages.pop('article_load')
        raise exc
Why is the old solution not a good one?
It doesn't allow inheriting schemas with different handle_error methods defined, and you have to give the pre_load and post_load methods distinct names.
Pass the data_key argument to the field definition
The documentation mentions that the data_key parameter can be used along with dump_only or load_only to have the same field with different functionality.
So you can write your schema as...
class PaymentSchema(Schema):
    decimal_money = fields.Decimal(data_key="money", load_only=True)
    money = fields.Float(dump_only=True)
This should solve your problem. I am using data_key for a similar problem in marshmallow with SQLAlchemyAutoSchema, and it fixed my issue.
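A quick round trip with that schema (my own illustration):

>>> from decimal import Decimal
>>> PaymentSchema().load({'money': '12.30'})
{'decimal_money': Decimal('12.30')}
>>> PaymentSchema().dump({'money': 12.3})
{'money': 12.3}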
Edit
Note: the key in ValidationError.messages (error messages) will be decimal_money by default. You may tweak the handle_error method of the Schema class to replace decimal_money with money, but that is not recommended, as you may then be unable to differentiate between the fields in the error messages.
Thanks.
I'm trying to learn how to create Python-based back-ends from some existing data that I have collected. I've come to realize that I definitely want to use SQLAlchemy, and Flask seems like a good library to go with it. My problem is that even after many hours of reading the SQLAlchemy docs and browsing various answers on Stack Exchange, I still don't understand how I can reshape data from an existing table into an object with a completely different structure.
The transformation I want to do is very concrete. I want to go from this structure in my MariaDB table:
Columns: company_name, date, indicators(1...23)
To this JSON output, generated from a serialized class object:
{
    "company_name[1]": {
        "indicator_name[1]": {
            "date[1]": "indicator_name[1].value[1]",
            "date[2]": "indicator_name[1].value[2]",
            "date[3]": "indicator_name[1].value[3]",
            "date[4]": "indicator_name[1].value[4]",
            "date[5]": "indicator_name[1].value[5]"
        },
        "indicator_name[2]": {
            "date[1]": "indicator_name[2].value[1]",
            "date[2]": "indicator_name[2].value[2]",
            "date[3]": "indicator_name[2].value[3]",
            "date[4]": "indicator_name[2].value[4]",
            "date[5]": "indicator_name[2].value[5]"
        },
I found a great tutorial with which I can output the entire table record by record, but the structure is not what I want, and I don't think creating the desired structure on the front-end makes sense in this case.
Here is the code that outputs the entire table to JSON record by record:
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import orm
from sqlalchemy import select, func
from sqlalchemy import Column, Integer, String, ForeignKey
from flask_marshmallow import Marshmallow
import decimal
import flask.json

class MyJSONEncoder(flask.json.JSONEncoder):  # Enables decimal queries for the API
    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            # Convert decimal instances to strings.
            return str(obj)
        return super(MyJSONEncoder, self).default(obj)

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://USER:PASS@localhost:3306/kl_balance_sheets'
app.json_encoder = MyJSONEncoder

db = SQLAlchemy(app)
ma = Marshmallow(app)

# Bind declarative base to engine
db.Model.metadata.reflect(db.engine)

class CompanyData(db.Model):
    __table__ = db.Model.metadata.tables['kl_balance_sheets']

class CompanyDataSchema(ma.ModelSchema):
    class Meta:
        model = CompanyData

@app.route('/')
def index():
    company_data = CompanyData.query.all()
    company_data_schema = CompanyDataSchema(many=True)
    output = company_data_schema.dump(company_data).data
    return jsonify({'company_data': output})

if __name__ == '__main__':
    app.run(debug=True)
My main question, I guess, is: how do I edit this code to produce the desired JSON?
What I think I should do is create a custom constructor and then feed that into the index function, but I can't figure out how to do that concretely. The two options I've come across are:
@orm.reconstructor
def init_on_load(self):
    # do custom stuff
or:
class Foo(db.Model):
    # ...
    def __init__(self, **kwargs):
        super(Foo, self).__init__(**kwargs)
        # do custom stuff
To me this seems like a basic operation any flask-marshmallow user would be doing regularly. Could someone please explain how SQL data is normally inserted into an object with a new structure and then serialized? In my case, do I need to change things mainly at the metadata, object, or marshmallow level? I'm surprised I can't find good examples of this.
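For what it's worth, one plain-Python way to get that nesting is to pivot the rows inside the view function before jsonify-ing, instead of serializing record by record. A sketch under the question's column layout (the indicator column names here are hypothetical):

from collections import defaultdict

@app.route('/nested')
def nested():
    indicator_names = ['indicator1', 'indicator2']  # hypothetical: the 23 indicator columns
    output = defaultdict(dict)
    for row in CompanyData.query.all():
        for name in indicator_names:
            # output[company][indicator][date] = value
            output[row.company_name].setdefault(name, {})[str(row.date)] = str(getattr(row, name))
    return jsonify(output)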
I am trying to build a database driver for Peewee and I'm having trouble getting the save() method to fill in the primary key/id for objects. Here's some sample code:
from datetime import date

from peewee import BooleanField
from peewee import CharField
from peewee import DateField
from peewee import ForeignKeyField
from peewee import IntegerField
from peewee import Model
from SQLRelay import PySQLRDB
from sqlrelay_ext import SQLRelayDatabase

DB = SQLRelayDatabase('test2', host='<host>', user='<un>', password='<pwd>')

class BaseModel(Model):
    class Meta:
        database = DB

class Person(BaseModel):
    name = CharField()
    birthday = DateField()
    is_relative = BooleanField()

class Pet(BaseModel):
    owner = ForeignKeyField(Person, backref='pets')
    name = CharField()
    animal_type = CharField()

DB.connect()

Person.create_table(safe=False)
Pet.create_table(safe=False)

uncle_bob = Person(name='Bob', birthday=date(1960, 1, 15), is_relative=True)
uncle_bob.save()  # bob is now stored in the database

print('Uncle Bob id: {}'.format(uncle_bob.id))
print('Uncle Bob _pk: {}'.format(uncle_bob._pk))
Both uncle_bob.id and uncle_bob._pk are None after .save(). From digging into the peewee.py code, it seems that the _WriteQuery.execute() method is supposed to set the _pk attribute, but that isn't happening. My best guess is that the cursor implementation isn't behaving properly. Does anyone have more insight that could help me track down this problem?
Thanks!
Edit to answer:
For SQL Server, the following code allows you to return the last inserted id:
def last_insert_id(self, cursor, query_type=None):
    try:
        cursor.execute('SELECT SCOPE_IDENTITY()')
        result = cursor.fetchone()
        return result[0]
    except (IndexError, KeyError, TypeError):
        pass
In your SQLRelayDatabase implementation, you will probably need to implement the last_insert_id() method correctly. For Python DB-API 2.0 drivers, this is typically cursor.lastrowid.
The default implementation is:
def last_insert_id(self, cursor, query_type=None):
    return cursor.lastrowid
Where cursor is the cursor object used to execute the insert query.
Databases like PostgreSQL do not implement this; instead you execute an INSERT ... RETURNING query, so the Postgres implementation is a bit different: it ensures that your insert query includes a RETURNING clause and then grabs the id returned.
Depending on your DB and the underlying DB driver, you'll need to pull that last insert id out somehow. Peewee should handle the rest, assuming last_insert_id() is implemented.
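As a sanity check of what last_insert_id() is expected to produce, the DB-API lastrowid behavior can be observed with the standard library's sqlite3 driver (illustration only; SQLRelay's cursor may expose this differently):

import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)')
cur.execute("INSERT INTO person (name) VALUES ('Bob')")
print(cur.lastrowid)  # -> 1, the value last_insert_id() should return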
I've got a bunch of GIS tables in my model that I created with Flask-SQLAlchemy. Each of these models has a 'geom' field, which is a WKB object.
These need to be JSON-serialized into WKT or GeoJSON so that the API GET call works.
I tried to use geoalchemy2 functions, but I'm stuck.
I use a Flask marshmallow/marshmallow-sqlalchemy combo, and I tried something like the following, with no luck:
from geoalchemy2 import functions
from marshmallow import fields

class WKTSerializationField(fields.Field):
    def _serialize(self, value, attr, obj):
        if value is None:
            return value
        else:
            if type(value).__name__ == 'WKBElement':
                return functions.ST_AsEWKT(value)
            else:
                return None

class GISModelTableSchema(ma.ModelSchema):
    class Meta:
        model = GISModelTable

    geom = WKTSerializationField(attribute="geom")
Please provide a code example, if you can, of how to serialize/deserialize a field with marshmallow-sqlalchemy. Any answer is welcome at this point.
Try using marshmallow-sqlalchemy's fields.Method(), and inside that method use to_shape from the geoalchemy2.shape package. This will help with the serialization issue.
#!schemas.py
from marshmallow import fields
from marshmallow_sqlalchemy import ModelSchema
from geoalchemy2.shape import to_shape

from .models import YourModel

class YourModelSchema(ModelSchema):
    your_geom_field = fields.Method("geom_to_dict")

    @staticmethod
    def geom_to_dict(obj):
        point = to_shape(obj.your_geom_field)
        return {
            "lat": point.y,
            "lon": point.x,
        }

    class Meta:
        model = YourModel
        exclude = ("your_geom_field",)
This might help you with serialization; for deserialization you can read up in more detail in the geoalchemy2 API reference.
Try coding all required fields yourself; that way you can get more specific serialization in exactly the form you want.
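For the deserialization direction, a minimal sketch along the same lines using a custom field: shapely parses the incoming WKT and geoalchemy2.shape.from_shape builds the WKBElement (the field name and SRID here are assumptions):

from geoalchemy2.shape import from_shape, to_shape
from marshmallow import fields
from shapely import wkt

class WKTField(fields.Field):
    # dumps WKBElement -> WKT text, loads WKT text -> WKBElement
    def _serialize(self, value, attr, obj, **kwargs):
        if value is None:
            return None
        return to_shape(value).wkt

    def _deserialize(self, value, attr, data, **kwargs):
        return from_shape(wkt.loads(value), srid=4326)  # assumed SRID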
After hours of debugging, and because my organization does not have a lot of Python expertise, I am turning to this community for help.
I am trying to follow this tutorial with the goal of committing some data to the database. Although no errors get reported, I am also not saving any rows. What am I doing wrong?
When trying to commit using the db2Session, I get:
Transaction must be committed using the transaction manager.
But nowhere in the tutorial do I see the transaction manager being used. I thought this manager is bound using zope.sqlalchemy? Yet nothing is happening otherwise. Help would be really appreciated!
I have the following setup in my main function in a Pyramid App:
from sqlalchemy import engine_from_config

from .models import db1Session, db2Session

def main(global_config, **settings):
    """ This function returns a Pyramid WSGI application. """
    db1_engine = engine_from_config(settings, 'db1.')
    db2_engine = engine_from_config(settings, 'db2.')
    db1Session.configure(bind=db1_engine)
    db2Session.configure(bind=db2_engine)
In .models/__init__.py, I have:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

db1Session = scoped_session(
    sessionmaker(extension=ZopeTransactionExtension()))
db2Session = scoped_session(
    sessionmaker(extension=ZopeTransactionExtension()))

Base = declarative_base()
In ./models/db2.py I have:
from sqlalchemy import Column, ForeignKey, Integer, String

from . import Base

class PlateWellResult(Base):
    __tablename__ = 'SomeTable'
    __table_args__ = {"schema": 'some_schema'}

    id = Column("ID", Integer, primary_key=True)
    plate_id = Column("PlateID", Integer)
    hit_group_id = Column("HitID", Integer, ForeignKey(
        'some_schema.HitGroupID.ID'))
    well_loc = Column("WellLocation", String)
The relevant bits of my saving function, in ./lib/db2_api.py, look like this:
def save_selected_rows(input_data, selected_rows, hit_group_id):
    """ Wrapper method for saving selected rows """
    # Assume I have all the right data below.
    new_hit_row = PlateWellResult(
        plate_id=master_plate_id,
        hit_group_id=hit_group_id,
        well_loc=selected_df_row.masterWellLocation)
    db1Session.add(new_hit_row)
    # When I try the line below:
    # db2Session.commit()
    # I get: Transaction must be committed using the transaction manager.
    # If I leave that line out, nothing gets committed.
    return 'Save successful.'
That function is called from my view:
@view_config(route_name='some_routename', renderer='json',
             permission='create_hit_group')
def save_to_hitgroup(self):
    """ Backend to AJAX call to save selected rows to a hit_group """
    try:
        # Assume that all values were checked and all the right
        # parameters are passed
        status = save_selected_rows(some_result,
                                    selected_rows_list,
                                    hitgroup_id)
        json_resp = json.dumps({'errors': [],
                                'status': status})
        return json_resp
    except Exception as e:
        json_resp = json.dumps({'errors': ['Error during saving. {0}'.format(e)],
                                'status': []})
        return json_resp
The comments above are good. I just wanted to summarize here.
The transaction manager is begun/committed/aborted by pyramid_tm. If you aren't using that then it's likely the issue.
You are also squashing possible database exceptions which need to be conveyed to the transaction manager. You can do this via transaction.abort() in the exception handler.
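A sketch of the view's exception handler with that change (transaction is the package zope.sqlalchemy integrates with; the rest mirrors the question's code):

import transaction

    except Exception as e:
        # convey the failure to the transaction manager so nothing is committed
        transaction.abort()
        return json.dumps({'errors': ['Error during saving. {0}'.format(e)],
                           'status': []})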