I'm rather new to the whole ORM topic, and I've already searched forums and docs.
The question is about a Flask application with SQLAlchemy as the ORM for PostgreSQL.
The __init__.py contains the following line:
db = SQLAlchemy()
The created object is referenced in the other files to access the DB.
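For context, a minimal sketch of the deferred-initialization wiring this implies (the app factory and database URI below are assumptions, not taken from the post):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # shared instance, imported by the model modules

def create_app():
    app = Flask(__name__)
    # placeholder URI; the real config is not shown in the post
    app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://user:password@localhost/mydb'
    db.init_app(app)
    return app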
There is a save function for the model:
def save(self):
    db.session.add(self)
    db.session.commit()
and also an update function:
def update(self):
    # requires: from sqlalchemy.orm.attributes import flag_modified
    for var_name in self.__dict__.keys():
        if var_name not in ('_sa_instance_state', 'id', 'foreign_id'):
            # Workaround for JSON update problem
            flag_modified(self, var_name)
    db.session.merge(self)
    db.session.commit()
The problem occurs when I'm trying to save a new object. The save function writes it to the DB, and it's visible when querying the DB directly (psql, etc.), but a subsequent ORM query like:
model_list = db.session.query(MyModel).filter(MyModel.foreign_id == this_id).all()
returns an empty list.
A call to the update function works as expected: the new data is visible when queried through the ORM.
I'm always using the same session object, for example:
<sqlalchemy.orm.scoping.scoped_session object at 0x7f0cff68fda0>
If the application is restarted, everything works fine until a new object is created and then queried through the ORM.
An ugly workaround is using raw SQL like:
model_list = db.session.execute('SELECT * FROM models_table WHERE foreign_id = ' + str(this_id))
which gives a ResultProxy with the latest data, like this:
<sqlalchemy.engine.result.ResultProxy object at 0x7f0cf74d0390>
I think my problem is a misunderstanding of the session. Can anyone help me?
It turned out that the problem has nothing to do with the session, but with the filter() method:
# Necessary import for string input into the filter() function
from sqlalchemy import text

# Solution or workaround
model_list = db.session.query(MyModel).filter(text('foreign_id = ' + str(this_id))).all()
I could not figure out the problem with filter(MyModel.foreign_id == this_id), but that's a separate issue.
I think this way is better than executing raw SQL.
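As a side note, text() also supports bound parameters, which avoids building the SQL fragment by string concatenation; a minimal sketch with the same model:

from sqlalchemy import text

model_list = (
    db.session.query(MyModel)
    .filter(text('foreign_id = :fid').bindparams(fid=this_id))
    .all()
)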
I have a form set up like this, where I use the values from TestModel to create a select field. However, if I want to update TestModel to add a new column, Django gives me a psycopg2.errors.UndefinedColumn saying the new column is not found on the table, and the stack trace points back to this form's ModelChoiceField. It's not until I comment out this select that I am able to run the migration against the database.
So my question is: is there a better way to set up this ModelChoiceField so I don't have to comment it out to update the underlying model in the database? I'm using Django 4.0 and Python 3.8 for reference.
class TestModelChoiceField(forms.ModelChoiceField):
    """
    Overwriting model choice field to use a different
    __str__ representation than the default model
    """
    def label_from_instance(self, obj):
        return obj.test_field


class TestForm(forms.Form):
    first_select = TestModelChoiceField(
        queryset=TestModel.objects.filter(model_attribute__isnull=False),
        initial=TestModel.objects.filter(model_attribute__isnull=False)
        .filter(test_field="Yes")
        .first()
        .team_id,
        label="Field One:",
    )
When it comes to needing database access in a form like this, you're best waiting until the form is instantiated rather than querying at import time as you do here. Set up this way, the form class is loaded as soon as your app starts, and the initial=...first().team_id expression immediately hits the database. If you wait until the app creates an instance of the form, you can be assured that the database will be ready/available.
So what you'd end up with is something like:
class TestForm(forms.Form):
    first_select = TestModelChoiceField(
        queryset=TestModel.objects.none(),
        label="Field One:",
    )

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['first_select'].queryset = TestModel.objects.filter(model_attribute__isnull=False)
        self.fields['first_select'].initial = TestModel.objects.filter(
            model_attribute__isnull=False
        ).filter(test_field="Yes").first().team_id
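As a side note, building a queryset is lazy (nothing executes until the choices are evaluated), so it is really the .first().team_id lookup that hits the database at import time. Django form fields also accept a callable for initial, which is only evaluated when a form instance needs it, so another sketch of a fix (same assumed models and fields as above):

class TestForm(forms.Form):
    first_select = TestModelChoiceField(
        # constructing a queryset does not run a query by itself
        queryset=TestModel.objects.filter(model_attribute__isnull=False),
        # a callable defers the lookup until the form is instantiated
        initial=lambda: TestModel.objects.filter(model_attribute__isnull=False)
        .filter(test_field="Yes")
        .first()
        .team_id,
        label="Field One:",
    )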
I must be doing something wrong
I'm trying to loop through all documents in a collection and add the contents to a dictionary.
I added the data to a dictionary, but my intention was to loop through all docs:
import firebase_admin
from firebase_admin import credentials, firestore

# cred = credentials.Certificate(...)  # credential setup not shown here
firebase_admin.initialize_app(cred)
db = firestore.client()
blog_col = db.collection(u'blog')

class Art_cont:
    def __init__(self, title, date, link):
        self.title = title
        self.date = date
        self.link = link

def getLast4():
    query = blog_col.order_by("date").limit_to_last(4).get()
    for doc in query:  # Need to find a way to loop through all docs, this doesn't work
        db_title = doc.to_dict()["title"]
        db_date = doc.to_dict()["date"]
        db_link = doc.to_dict()["link"]
        content1 = Art_cont(db_title, db_date, db_link)
        # here wrap them up with html and return them to app
        print(content1.title, content1.date, content1.link)
When I run that code it only gives me the first doc's content, even though the other docs have the same structure.
Any advice would be appreciated.
Assuming that query is an object of your model, write db_title = doc.your_attribute_name.to_dict()['title'] and do the same for the other attributes.
Well, I almost had it. I ended up creating a function instead:
def getlist(x):
    query = blog_col.order_by("date").limit_to_last(4).get()
    docs = []
    for doc in query:
        docs.append(doc)
    docnum = docs[x]
    db_title = docnum.to_dict()["title"]
    db_date = docnum.to_dict()["date"]
    db_link = docnum.to_dict()["link"]
    content = Art_cont(db_title, db_date, db_link)
    return content

print(getlist(0).title)
print(getlist(1).title)
etc...
Hope this helps anyone in a similar situation.
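One follow-up note: getlist() re-runs the Firestore query on every call, so printing four articles issues the same read four times. A sketch that builds all the objects in one pass, reusing the blog_col and Art_cont defined above:

def get_last4():
    query = blog_col.order_by("date").limit_to_last(4).get()
    return [
        Art_cont(d.to_dict()["title"], d.to_dict()["date"], d.to_dict()["link"])
        for d in query
    ]

for art in get_last4():
    print(art.title, art.date, art.link)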
I managed to make a connection to a Google Cloud Datastore database. Now I want to get some entities given their Key/ID. Right now I am doing the following:
from google.cloud import datastore
client = datastore.Client()
query = client.query(kind='City')
query.key_filter("325899977574122")  # Exception here
I get "Invalid key: '325899977574122'".
What could be the cause of error? That Id exist, a city does have that key/Id.
It looks like it needs to be of type google.cloud.datastore.key.Key
https://googleapis.dev/python/datastore/latest/queries.html#google.cloud.datastore.query.Query.key_filter
Also, 325899977574122 is probably supposed to be passed as an integer, not a string.
So something like this:
from google.cloud.datastore.key import Key

client = datastore.Client()
query = client.query(kind='City')
query.key_filter(Key('City', 325899977574122, project=project))
EDIT:
Also, if you're trying to retrieve a single entity by ID, you should probably use this:
https://googleapis.dev/python/datastore/latest/client.html#google.cloud.datastore.client.Client.get
client = datastore.Client()
client.get(Key('City', 325899977574122, project=project))
Fetching by ID is faster than doing a query.
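For what it's worth, the client also has a key() helper that fills in the project automatically, so the lookup can be written without constructing a Key directly (a sketch using the kind and ID from the question):

client = datastore.Client()
entity = client.get(client.key('City', 325899977574122))
print(entity)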
I am trying to build a database driver for Peewee and I'm having trouble getting the save() method to fill in the primary key/ID for objects. Here's some sample code:
from datetime import date
from peewee import BooleanField
from peewee import CharField
from peewee import DateField
from peewee import ForeignKeyField
from peewee import IntegerField
from peewee import Model
from SQLRelay import PySQLRDB
from sqlrelay_ext import SQLRelayDatabase
DB = SQLRelayDatabase('test2', host='<host>', user='<un>', password='<pwd>')
class BaseModel(Model):
    class Meta:
        database = DB

class Person(BaseModel):
    name = CharField()
    birthday = DateField()
    is_relative = BooleanField()

class Pet(BaseModel):
    owner = ForeignKeyField(Person, backref='pets')
    name = CharField()
    animal_type = CharField()

DB.connect()
Person.create_table(safe=False)
Pet.create_table(safe=False)
uncle_bob = Person(name='Bob', birthday=date(1960, 1, 15), is_relative=True)
uncle_bob.save() # bob is now stored in the database
print('Uncle Bob id: {}'.format(uncle_bob.id))
print('Uncle Bob _pk: {}'.format(uncle_bob._pk))
Both uncle_bob.id and uncle_bob._pk are None after .save(). From digging into the peewee.py code, it seems that the _WriteQuery.execute() method is supposed to set the _pk attribute, but that isn't happening. My best guess is that the cursor implementation isn't behaving properly. Does anyone have more insight that might help me track down this problem?
Thanks!
Edit to answer:
For SQL Server, the following code allows you to return the last inserted id:
def last_insert_id(self, cursor, query_type=None):
    try:
        cursor.execute('SELECT SCOPE_IDENTITY()')
        result = cursor.fetchone()
        return result[0]
    except (IndexError, KeyError, TypeError):
        pass
In your SQLRelayDatabase implementation, you will probably need to correctly implement the last_insert_id() method. For Python DB-API 2.0 drivers, this typically looks like cursor.lastrowid.
The default implementation is:
def last_insert_id(self, cursor, query_type=None):
    return cursor.lastrowid
Where cursor is the cursor object used to execute the insert query.
Databases like PostgreSQL do not implement this; instead you execute an INSERT ... RETURNING query, so the Postgres implementation is a bit different: it ensures that your insert query includes a RETURNING clause, and then grabs the id returned.
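At the cursor level, the RETURNING approach looks roughly like this (a sketch against an assumed person table, not Peewee's actual internals):

# Postgres-style: the new id comes back as part of the INSERT itself
cursor.execute(
    "INSERT INTO person (name) VALUES (%s) RETURNING id",
    ('Bob',),
)
new_id = cursor.fetchone()[0]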
Depending on your DB and the underlying DB driver, you'll need to pull that last insert id out somehow. Peewee should handle the rest, assuming last_insert_id() is implemented.
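A quick way to check the wiring once last_insert_id() is implemented (a sketch reusing the models from the question):

uncle_bob = Person(name='Bob', birthday=date(1960, 1, 15), is_relative=True)
uncle_bob.save()
assert uncle_bob.id is not None, 'last_insert_id() did not return a value'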
I've recently started using PyMongo as an interface with MongoDB. But I'm having some strange issues when deleting documents from a collection.
Here is an example:
from bson import ObjectId
from pymongo import MongoClient

# Open connection
client = MongoClient(mongo_html)
collection_post = client["MyCollection"].posts

# Delete procedure
_ids_to_delete = [ObjectId("xxxxxxx..."), ..., ObjectId("xxxxxxx...")]
n_to_delete = len(_ids_to_delete)

result = collection_post.delete_many({'_id': {'$in': _ids_to_delete}})
n_delete = result.deleted_count

if n_delete != n_to_delete:
    raise Exception("Well well well...")
Now, I know for a fact that all the documents in _ids_to_delete exist in the database. In fact, if the exception is raised and I run the following:
if n_delete != n_to_delete:
    for _id in _ids_to_delete:
        search_result = collection_post.find({'_id': _id})
it will still find documents that were supposed to be deleted. To get around this, I tried using delete_one() in a loop instead, with similar results.
Am I missing something here? Could the fact that another process on another computer is writing to the same collection at the same time have this effect?
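One sketch of a check that would narrow this down, reusing collection_post from above (count_documents is standard PyMongo API): count how many of the ids actually match immediately before deleting, to separate "the filter matches fewer documents than expected" from "the delete is misreported".

n_found = collection_post.count_documents({'_id': {'$in': _ids_to_delete}})
print('{} of {} ids currently match'.format(n_found, len(_ids_to_delete)))

result = collection_post.delete_many({'_id': {'$in': _ids_to_delete}})
print('{} deleted'.format(result.deleted_count))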