I'm using flask-login and MongoDB as my database to store user profiles.
When checking whether a user is authenticated in my login function:
@bp.route('/login', methods=['GET', 'POST'])
def login():
    if current_user.is_authenticated:
        return redirect(url_for('routes.index'))
I'm getting the following error:
AttributeError: 'BaseQuerySet' object has no attribute 'is_authenticated'
My User object is extending UserMixin from flask-login.
Any idea what might be wrong?
OK, I found it. In my user_loader function I did something like this:
@login.user_loader
def load_user(id):
    return User.objects(_id=ObjectId(id))
whereas the correct way to get a single result (not the entire collection) using MongoEngine (which apparently translates _id into id) would be this:
@login.user_loader
def load_user(id):
    return User.objects(id=ObjectId(id)).first()
As you said in your answer, the MongoEngine interface requires id instead of _id in queries. However, if you check the object representation, the ID of a Document is still stored in the ._id variable.
Also, you don't need to convert the_id to an ObjectId() using ObjectId(the_id), and you can use the User.objects.get(id=the_id) function to get a single Document instead of User.objects(id=the_id).first(), as in:
@login.user_loader
def load_user(user_id):
    try:
        return User.objects.get(id=user_id)
    except Exception as e:
        print(e)
        raise
I noticed that when having a Model such as:
class User(Model):
    id = ...
    books = relationship('Book')
When calling user.books for the first time, SQLAlchemy queries the database (when lazy='select', for instance, which is the default), but subsequent calls to user.books don't hit the database. The results seem to have been cached.
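For illustration, a minimal sketch of what I mean (the engine/session boilerplate here is assumed, it is not part of my actual code):
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://', echo=True)  # echo=True to see the emitted SQL
session = sessionmaker(bind=engine)()

user = session.query(User).first()
books_first = user.books    # a SELECT for the books is emitted (lazy='select')
books_again = user.books    # no SQL this time, the loaded collection is reused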
I'd like to have the same behaviour from SQLAlchemy when using a method that queries, for instance:
class User:
    def get_books(self):
        return Book.query.filter(Book.user_id == self.id).all()
But when doing that, if I call get_books() 3 times, SQLAlchemy queries the database 3 times (with the ECHO property set to True).
How can I change get_books() to use the caching system from SQLAlchemy?
I insist on mentioning "from SQLAlchemy" because I believe it handles the refresh/expunge/flush system, and changes are then re-queried from the DB if one of these happened. This is opposed to simply creating a caching property in the model with something like:
def get_books(self):
    if self._books is None:
        self._books = Book.query.filter(Book.user_id == self.id).all()
    return self._books
This does not work well with flush/refresh/expunge from SQLAlchemy.
So, how can I change get_books() to use the caching system from SQLAlchemy?
Edit 1:
I realized that the solution provided below is not perfect, because it caches per object. If you have two instances of the same user and call get_books on both, two queries will be made, because the caching applies only to the instance, not globally, contrary to SQLAlchemy.
The reason is simple - I believe - but it's still unclear how to apply it in my case: the attribute is defined at the class level, not the instance level (books = relationship()), and relationships build their own query internally, so they can cache based on the query.
In the solution I gave, memoize_getter is unaware of the query being made, and as such cannot cache it for the same value across multiple instances, so any identical call made on another instance will query the database.
Original answer:
I've been trying to wrap my head around SQLAlchemy's code (wow that's dense!), and I think I figured it out!
A relationship, at least when set as lazy='select' (the default), is an InstrumentedAttribute, which contains a __get__ method that does the following:
def __get__(self, instance, owner):
    if instance is None:
        return self

    dict_ = instance_dict(instance)
    if self._supports_population and self.key in dict_:
        return dict_[self.key]
    else:
        try:
            state = instance_state(instance)
        except AttributeError as err:
            util.raise_(
                orm_exc.UnmappedInstanceError(instance),
                replace_context=err,
            )
        return self.impl.get(state, dict_)
So, a basic caching system, respecting SQLAlchemy, would be something like:
from sqlalchemy.orm.base import instance_dict

def get_books(self):
    dict_ = instance_dict(self)
    if 'books' not in dict_:
        dict_['books'] = Book.query.filter(Book.user_id == self.id).all()
    return dict_['books']
Now, we can push this a bit further and write... a decorator (oh sweet):
import functools

def memoize_getter(f):
    @functools.wraps(f)
    def decorator(instance, *args, **kwargs):
        property_name = f.__name__.replace('get_', '')
        dict_ = instance_dict(instance)
        if property_name not in dict_:
            dict_[property_name] = f(instance, *args, **kwargs)
        return dict_[property_name]
    return decorator
Thus transforming the original method to:
class User:
    @memoize_getter
    def get_books(self):
        return Book.query.filter(Book.user_id == self.id).all()
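For reference, a quick usage sketch (session setup assumed): with echo=True, the second call should not emit any SQL, because the result is stored in the same instance dict that SQLAlchemy uses for loaded attributes.
user = session.query(User).first()
user.get_books()   # emits a SELECT, the result is stored in the instance dict
user.get_books()   # served from the instance dict, no SQL emitted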
If someone has a better solution, I'm eagerly interested!
I am trying to make a bot using Selenium.
So far I have made a login function which logs me into my account and takes me to my profile:
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class insta_logger():
    '''
    Sets up the driver and opens the browser;
    then it takes the username and password from the user and helps him log in
    using the login function.
    '''
    def __init__(self, username, password):
        self.browser = webdriver.Firefox()
        self.username = username
        self.password = password

    def login_in_func(self):
        self.browser.get("https://instagram.com")
        sleep(2)
        self.browser.fullscreen_window()
        login_key = self.browser.find_element_by_xpath('//input[@name="username"]')
        login_pass = self.browser.find_elements_by_css_selector('form input')[1]
        login_key.send_keys(self.username)
        login_pass.send_keys(self.password)
        login_pass.send_keys(Keys.ENTER)
        sleep(6)
This works fine.
But what I want is for it to go to a particular Instagram ID, get the number of posts there, and like them one by one.
So I wrote this liker function on my own, just to test whether it is able to pick up the number or not.
I tried to test it first so that it can at least print out the numbers:
def liker(self, username):
    sleep(3)
    self.browser.get("https://instagram.com/" + username + '/')
    posts = self.browser.find_element_by_class_name('g47SY')
    # this class name changes if full screen is not present
    # so take care of that
    if posts:
        try:
            for post in int(posts):
                print(post)
        except Exception as e:
            print(e)
        finally:
            sleep(3)
But it gave me a weird error:
int() argument must be a string, a bytes-like object or a number, not 'FirefoxWebElement'
Any help on how I can get the number of posts as an int would be appreciated; the error above explains which int I want.
posts is an Element, something like: <span class="g47SY ">3</span>.
You want the text inside of it, so you should do for post in range(int(posts.text)).
For the class that changes if the browser is on fullscreen or not, you can write a function to check if an element is present or not by using a try/except block:
def class_present(self, class_name):
    try:
        element = self.browser.find_element_by_class_name(class_name)
        return element
    except:
        return False
And then:
xpath_full = '//*[@id="react-root"]/section/main/div/header/section/ul/li[1]/span/span'
xpath_mobile = '//*[@id="react-root"]/section/main/div/ul/li[1]/span/span'
posts = (self.browser.find_element_by_xpath(xpath_full)
         if self.class_present('g47SY')
         else self.browser.find_element_by_xpath(xpath_mobile))
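Putting it together, a sketch of how the liker method could then look (the XPaths and the g47SY class name are assumptions based on Instagram's markup at the time, so they may need updating):
def liker(self, username):
    xpath_full = '//*[@id="react-root"]/section/main/div/header/section/ul/li[1]/span/span'
    xpath_mobile = '//*[@id="react-root"]/section/main/div/ul/li[1]/span/span'
    sleep(3)
    self.browser.get("https://instagram.com/" + username + '/')
    # pick the right element depending on whether the fullscreen class is present
    posts = (self.browser.find_element_by_xpath(xpath_full)
             if self.class_present('g47SY')
             else self.browser.find_element_by_xpath(xpath_mobile))
    # posts.text is the string inside the <span>, e.g. "3"
    for post in range(int(posts.text)):
        print(post)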
I've been researching a lot, but I haven't found a way.
I have Document classes with an _owner attribute which holds the ObjectId of the owner; this is a per-request value, so it's globally available. I would like to be able to set part of the query by default.
For example, doing this query
MyClass.objects(id='12345')
should be the same as doing
MyClass.objects(id='12345', _owner=global.owner)
because _owner=global.owner is always added by default
I haven't found a way to override objects, and using a queryset_class is somewhat confusing because I still have to remember to call an .owned() manager to add the filter every time I want to query something.
It ends up like this...
MyClass.objects(id='12345').owned()
# same as ...
MyClass.objects(id='12345', _owner=global.owner)
Any idea? Thanks!
The following should do the trick for querying (example is simplified by using a constant owned=True but it can easily be extended to use your global):
class OwnedHouseWrapper(object):
    # Implements the descriptor protocol
    def __get__(self, instance, owner):
        return House.objects.filter(owned=True)

    def __set__(self, instance, value):
        raise Exception("can't set .objects")

class House(Document):
    address = StringField()
    owned = BooleanField(default=False)

class OwnedHouse:
    objects = OwnedHouseWrapper()
House(address='garbage 12', owned=True).save()
print(OwnedHouse.objects()) # [<House: House object>]
print(len(OwnedHouse.objects)) # 1
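To use your per-request owner instead of the constant, the wrapper just needs to read that global when the attribute is accessed. A sketch (the get_current_owner() helper and current_request_owner are stand-ins for however you expose your per-request value):
def get_current_owner():
    # stand-in for your per-request global, e.g. flask.g.owner
    return current_request_owner

class OwnedWrapper(object):
    def __init__(self, document_cls):
        self.document_cls = document_cls

    def __get__(self, instance, owner):
        # every access returns a queryset already filtered by the current owner
        return self.document_cls.objects.filter(_owner=get_current_owner())

class OwnedMyClass:
    objects = OwnedWrapper(MyClass)

# OwnedMyClass.objects(id='12345') now behaves like
# MyClass.objects(id='12345', _owner=global.owner)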
I am trying to build a marshmallow schema to both load and dump data, and I get everything OK except one field.
Problem description
(If you understand the problem, you don't have to read this).
For loading, its type is Decimal, and I used it like this before. Now I want to use this schema for dumping too, and for that my flask API responds with: TypeError: Object of type Decimal is not JSON serializable. OK, I understand. I changed the type to Float. Then my legacy code started to get an exception while trying to save that field to the database (it takes Decimal only). I don't want to change the legacy code, so I looked for a solution in the marshmallow docs and found the load_only and dump_only params. It seems like those are what I wanted, but here is my problem: I want to set them on the same field. So I just wondered if I could define both fields, and tried this:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)
    money = fields.Float(dump_only=True)
I was expecting a miracle, of course. Actually, I was thinking that it would skip the first definition (or rather, re-define it). What I got is the absence of the field altogether.
Workaround solution
So I tried another solution. I created another schema for dumping and inherited it from the former schema:
class PaymentSchema(Schema):
    money = fields.Decimal(load_only=True)

class PaymentDumpSchema(PaymentSchema):
    money = fields.Float(dump_only=True)
It works. But I wonder if there's another, native, "marshmallow-way" solution for this. I have been looking through the docs but I can't find anything.
You can use the marshmallow decorator @pre_load. In this decorator you can do whatever you want and return your desired type:
from marshmallow import pre_load
Import it like this; in the decorated method you will get your payload and can change the type as per your requirement.
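To make that concrete, here is one way the idea could be sketched (this is my reading of the suggestion, using the sibling @post_load hook rather than @pre_load): keep the field as Float so dumping stays JSON serializable, and convert the loaded value back to Decimal so the legacy code still receives a Decimal.
from decimal import Decimal

from marshmallow import Schema, fields, post_load

class PaymentSchema(Schema):
    money = fields.Float()

    @post_load
    def money_to_decimal(self, data, **kwargs):
        # dump keeps a plain float (JSON serializable),
        # load hands a Decimal to the legacy code
        data['money'] = Decimal(str(data['money']))
        return data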
UPD: I finally found a good solution.
NEW SOLUTION
The trick is to define your field in load_fields and dump_fields inside the __init__ method.
from marshmallow.fields import Integer, String, Raw
from marshmallow import Schema

class ItemDumpLoadSchema(Schema):
    item = Raw()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if not (self.only and 'item' not in self.only) and \
                not (self.exclude and 'item' in self.exclude):
            self.load_fields['item'] = Integer(missing=0)
            self.dump_fields['item'] = String()
Usage:
>>> ItemDumpLoadSchema().load({})
{'item': 0}
>>> ItemDumpLoadSchema().dump({'item': 0})
{'item': '0'}
Don't forget to define the field in the schema with some field class (Raw in my example) - otherwise it may raise an exception in some cases (e.g. when using the only and exclude keywords).
OLD SOLUTION
A little perverted one. It is based on @prashant-suthar's answer. I named the load field with the suffix _load and implemented @pre_load, @post_load and error handling.
class ArticleSchema(Schema):
    id = fields.String()
    title = fields.String()
    text = fields.String()

class FlowSchema(Schema):
    article = fields.Nested(ArticleSchema, dump_only=True)
    article_load = fields.Int(load_only=True)

    @pre_load
    def pre_load(self, data, *args, **kwargs):
        if data.get('article'):
            data['article_load'] = data.pop('article')
        return data

    @post_load
    def post_load(self, data, *args, **kwargs):
        if data.get('article_load'):
            data['article'] = data.pop('article_load')
        return data

    def handle_error(self, exc, data, **kwargs):
        if 'article_load' in exc.messages:
            exc.messages['article'] = exc.messages.pop('article_load')
        raise exc
Why is the old solution not a good solution?
It doesn't allow inheriting schemas with different handle_error methods defined, and you have to give the pre_load and post_load methods different names.
Pass the data_key argument to the field definition.
The documentation mentions that the data_key parameter can be used along with dump_only or load_only to have the same field with different functionality.
So you can write your schema as...
class PaymentSchema(Schema):
    decimal_money = fields.Decimal(data_key="money", load_only=True)
    money = fields.Float(dump_only=True)
This should solve your problem. I am using data_key for similar problem in marshmallow with SQLAlchemyAutoSchema and this fixed my issue.
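A quick round trip to illustrate how data_key maps the keys (the values are made up; note that on load the value lands under the attribute name decimal_money):
>>> PaymentSchema().load({"money": "12.34"})
{'decimal_money': Decimal('12.34')}
>>> PaymentSchema().dump({"money": 12.34})
{'money': 12.34}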
Edit
Note: The key in ValidationError.messages (error messages) will be decimal_money by default. You may tweak the handle_error method of the Schema class to replace decimal_money with money, but it is not recommended, as you may then not be able to differentiate between the error messages for the two fields.
Thanks.
I am using a Python package for database management. The provided class has a method delete() that deletes a record from the database. Before deleting, it asks the user to confirm the operation from a console, e.g. Proceed? [yes, No]:
My function needs to perform other actions depending on whether a user chose to delete a record. Can I get the user's input requested by the function from the package?
Toy example:
def ModuleFunc():
    while True:
        a = input('Proceed? [yes, No]:')
        if a in ['yes', 'No']:
            # Perform some actions behind the hood
            return
This function will wait for one of the two responses and return None once it gets either. After calling this function, can I determine the user's response (without modifying this function)? I think modifying the package's source code is not a good idea in general.
Why not just patch the class at runtime? Say you had a file ./lib/db.py defining a class DB like this:
class DB:
    def __init__(self):
        pass

    def confirm(self, msg):
        a = input(msg + ' [Y, N]:')
        if a == 'Y':
            return True
        return False

    def delete(self):
        if self.confirm('Delete?'):
            print('Deleted!')
        return
Then in main.py you could do:
from lib.db import DB

def newDelete(self):
    if self.confirm('Delete?'):
        print('Do some more stuff!')
        print('Deleted!')
    return

DB.delete = newDelete

test = DB()
test.delete()
I would save key events somewhere (a file or memory) with something like a keylogger. Then you will be able to reuse the last one.
However, if you can modify the module package and redistribute it, that would be easier.
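A lighter variant of that idea, without a full keylogger, is to temporarily wrap builtins.input so the last answer the package received is recorded (a sketch; ModuleFunc stands in for the package call):
import builtins

_original_input = builtins.input
last_answer = None

def recording_input(prompt=''):
    # delegate to the real input() but remember what the user typed
    global last_answer
    last_answer = _original_input(prompt)
    return last_answer

builtins.input = recording_input
try:
    ModuleFunc()          # the package prompts the user as usual
finally:
    builtins.input = _original_input

if last_answer == 'yes':
    print('the record was deleted, do the follow-up work')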