SQLAlchemy with multiple relationships and files - python-3.x

Hello everyone,
We are currently working with SQLAlchemy and Flask on a web service. As part of this work we have a class A in the file a.py:
from common import db

class A(db.Model):
    ...
    b = relationship('B')
    c = relationship('C')
    ...
B and C are located in two distinct files (b.py and c.py).
The current endpoint we are creating needs only the A and B classes, but we are forced to import C as well:
from common.a import A
from common.b import B
from common.c import C

class NewResource(Resource):
    def get(self):
        # do something with A and A.b
        ...
If I remove the import of C I get:
sqlalchemy.exc.InvalidRequestError: One or more mappers failed to initialize - can't proceed with initialization of other mappers. Triggering mapper: 'Mapper|A|c'. Original exception was: When initializing mapper Mapper|A|c, expression 'C' failed to locate a name ("name 'C' is not defined"). If this is a class name, consider adding this relationship() to the class after both dependent classes have been defined.
We are also reconsidering the whole idea of separating the classes into different files at all.
Thanks for the help.
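The string name in relationship('C') is resolved against SQLAlchemy's class registry when the mappers are first configured, so every class named in a relationship must have been imported (and therefore defined) by that point. A minimal sketch of one common workaround, assuming your module layout: import every model once in a central module and let endpoints import from there, so no endpoint needs to know about C.

# common/models.py (hypothetical central module)
# Importing c here has the side effect of registering the C mapper,
# so relationship('C') on A can always be resolved.
from common.a import A
from common.b import B
from common.c import C

# endpoint file: import from the central module instead of the model files
from common.models import A, B

class NewResource(Resource):
    def get(self):
        # A and A.b are safe to use; C is registered but never referenced here
        ...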

Related

What library or framework can be used to parse a Python file and extract the base classes as well as the methods inside them?

class A:
    def m1(self):
        # in a.m1
        pass

class B(A):
    def m2(self):
        # in b.m2
        pass

Parsing the above code should give me the following info: the class names (A and B), the base of B (A), and the method names (a.m1 and b.m2).
I have looked into Jedi, but I don't see any API to extract the above information.
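The standard-library ast module can extract exactly this, without a third-party dependency. A minimal sketch (the source string stands in for the parsed file):

import ast

source = '''
class A:
    def m1(self):
        pass

class B(A):
    def m2(self):
        pass
'''

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        # direct base classes that are plain names (A in "class B(A):")
        bases = [base.id for base in node.bases if isinstance(base, ast.Name)]
        # methods defined directly in the class body
        methods = [item.name for item in node.body
                   if isinstance(item, ast.FunctionDef)]
        print(node.name, bases, methods)

# Output:
# A [] ['m1']
# B ['A'] ['m2']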

The best way to share a class between processes

First of all, I'm pretty new to multiprocessing and I'm here to learn from all of you. I have several files doing something similar to this:
SharedClass.py:

class simpleClass():
    a = 0
    b = ""
    .....
MyProcess.py:

import multiprocessing
import SharedClass

class FirstProcess(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)

    def modifySharedClass(self):
        # Here I want to modify the object shared with Main.py defined in SharedClass.py
        ...
Main.py:

from MyProcess import FirstProcess
import SharedClass

if __name__ == '__main__':
    pr = FirstProcess()
    pr.start()
    # Here I want to print the initial value of the shared class
    pr.modifySharedClass()
    # Here I want to print the modified value of the shared class
I want to define a shared class (in SharedClass.py) in a kind of shared memory that can be read and written from both Main.py and MyProcess.py.
I have tried to use the Manager from multiprocessing and multiprocessing.Array, but I'm not having good results: the changes made in one process are not being reflected in the other (maybe I'm doing this the wrong way).
Any ideas? Thank you.
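One approach that is known to work is to host the instance in a manager process and hand proxies to the workers. A minimal sketch using multiprocessing.managers.BaseManager (the file layout is collapsed into one script for brevity; note that proxies forward method calls, not plain attribute access, hence the getter/setter):

import multiprocessing
from multiprocessing.managers import BaseManager

class SimpleClass:
    def __init__(self):
        self.a = 0
        self.b = ""

    def set_a(self, value):
        self.a = value

    def get_a(self):
        return self.a

class MyManager(BaseManager):
    pass

MyManager.register('SimpleClass', SimpleClass)

def worker(shared):
    shared.set_a(42)  # mutates the one real instance inside the manager process

if __name__ == '__main__':
    with MyManager() as manager:
        shared = manager.SimpleClass()  # a proxy; the real object lives in the manager
        print(shared.get_a())           # 0
        p = multiprocessing.Process(target=worker, args=(shared,))
        p.start()
        p.join()
        print(shared.get_a())           # 42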

Loading a class of unknown name in a dynamic location

Currently I am extracting files to the temp directory of the operating system. One of the files is a Python file containing a class which I need to get a handle on. The path to the Python file is known, but the name of the class inside it is not. It is safe to assume that there is only one class in the file, and that this class is a subclass of another.
I tried to work with importlib, but I am not able to get a handle on the class.
So far I have tried:
from importlib.util import spec_from_file_location, module_from_spec
import inspect

# Assume
# module_name contains the name of the class -> "MyClass"
# path_module contains the path to the python file -> "../Module.py"
spec = spec_from_file_location(module_name, path_module)
module = module_from_spec(spec)

for pair in inspect.getmembers(module):
    print(f"{pair[1]} is class: {inspect.isclass(pair[1])}")
When I iterate over the members of the module, none of them are reported as a class. My class in this case is called BasicModel, and the output on the console looks like this:
BasicModel is class: False
What is the correct approach to this?
Edit:
As the content of the file was requested, here you go:
class BasicModel(Sequential):
    def __init__(self, class_count: int, input_shape: tuple):
        Sequential.__init__(self)
        self.add(Input(shape=input_shape))
        self.add(Flatten())
        self.add(Dense(128, activation=nn.relu))
        self.add(Dense(128, activation=nn.relu))
        self.add(Dense(class_count, activation=nn.softmax))
Use dir() to get the attributes of the module and inspect to check whether an attribute is a class. If so, you can create an object from it.
Assuming that your file's path is /tmp/mysterious.py, you can do this:
import importlib
import inspect
from pathlib import Path
import sys

path_pyfile = Path('/tmp/mysterious.py')
sys.path.append(str(path_pyfile.parent))
mysterious = importlib.import_module(path_pyfile.stem)

for name_local in dir(mysterious):
    if inspect.isclass(getattr(mysterious, name_local)):
        print(f'{name_local} is a class')
        MysteriousClass = getattr(mysterious, name_local)
        mysterious_object = MysteriousClass()
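For completeness: the original spec-based attempt fails because module_from_spec() only creates the module object; the file is never executed, so none of its classes exist yet. Adding one exec_module() call fixes that route too. A minimal sketch, keeping the question's variable names:

import inspect
from importlib.util import spec_from_file_location, module_from_spec

spec = spec_from_file_location(module_name, path_module)
module = module_from_spec(spec)
spec.loader.exec_module(module)  # actually run the file; without this the module is empty

for name, obj in inspect.getmembers(module, inspect.isclass):
    print(f"{name} is a class")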

Importing a module causes an error, but splitting it into two and then importing doesn't. Why?

I'm trying to work out for myself how I could get around the problem I recently asked about here, and I came across a potential solution, but I honestly don't understand why it works and why the other version doesn't.
For context: the model requires the variables a and b to be defined before it can be successfully loaded in a module. Otherwise, it throws an error: NameError: name 'a' is not defined.
Starting off with model.py:

import pickle
from tensorflow import keras

# loads and returns the variables needed by model
def load_model_vars():
    return pickle.load(open('./file.pkl', 'rb'))

# loads and returns the model
def load_model():
    return keras.models.load_model('./model.h5')
Now, to minimally reproduce and identify the problem, I created a new module, foo.py:

from model import load_model_vars, load_model

# goal here is to supposedly expose only the model to other modules
a, b = load_model_vars()
globals()['model'] = load_model()
I then created another module to import foo.py into; let's name it bar.py:

import foo

# just checks if the model is defined
foo.model.summary()
Which for some reason throws the previously mentioned NameError. Why? The variables are defined, and everything is executed in order (load the variables first, then the model). Even if I change a, b to globals()['a'], globals()['b'], change import foo to from foo import * or from foo import a, b, or try combinations of any of these, it always arrives at this error.
But when I introduce another module, say baz.py, that contains these two lines:

from model import load_model_vars

a, b = load_model_vars()
Then import it in bar.py:

from baz import a, b
import foo

# just checks if the model is defined
foo.model.summary()
With foo.py unchanged, or with a, b = load_model_vars() commented out:

from model import load_model_vars, load_model

# goal here is to supposedly expose only the model to other modules
# a, b = load_model_vars()
globals()['model'] = load_model()
It successfully loads the freaking model! Why? What's this sorcery underneath the import function? What actually happens under the hood?
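A background fact that helps with questions like this: import executes a module's code exactly once, in that module's own namespace, so names bound in foo.py become attributes of the foo module, not globals of the importing script. A minimal two-file illustration (hypothetical names, no Keras involved):

# foo.py
a = 1

# bar.py
import foo

print(foo.a)  # 1: 'a' is an attribute of the foo module object
print(a)      # NameError: 'a' was never bound in bar's own namespace

Whether a name like a counts as "defined" therefore depends on which module's namespace the failing code looks it up in; the observed behavior suggests the model-loading code resolves a and b somewhere other than foo's namespace, which would be why binding them in the importing script (via from baz import a, b) makes a difference.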

Which form of relative import to prefer inside a package

I'm writing a library named Foo as an example.
The __init__.py file:

from .foo_exceptions import *
from .foo_loop import FooLoop

main_loop = FooLoop()

from .foo_functions import *

__all__ = ['main_loop'] + foo_exceptions.__all__ + foo_functions.__all__
When installed, it can be used like this:
# example A
from Foo import foo_create, main_loop
foo_obj = foo_create()
main_loop().register(foo_obj)
or like this:
# example B
import Foo
foo_obj = Foo.foo_create()
Foo.main_loop().register(foo_obj)
I clearly prefer the example B approach: no name conflicts, and the source of each external object is explicitly stated.
So much for the introduction; now my question. Inside this library I need to import something from a different file. Again, I have several ways to do it, and the question is which style to prefer: C, D, or E? Read below.
# example C
from . import foo_exceptions
raise foo_exceptions.FooError("fail")
or
# example D
from .foo_exceptions import FooError
raise FooError("fail")
or
# example E
from . import FooError
raise FooError("fail")
Approach C has the disadvantage that importing a whole module instead of just the few required objects increases the chance of a circular import problem. Also consider this line:
from . import foo_exceptions, main_loop
It looks like an import of 2 symbols from one source, but it isn't. The former (foo_exceptions) is a module (.py file) in the current directory and the latter is an object defined in __init__.py.
That's why I'm not using style C and the question in its final form is: D or E (and why)?
(Thank you for reading this long question. All code fragments are examples only and may contain typos)
After the answer from alexanderlukanin:
EDIT1: corrected errors in __init__.py
NOTE1: foo_ prefixes are only to emphasize the relationship between objects
EDIT2: When importing an object which is not part of the library interface, style E is not usable. I think we have a winner: it's the from .module import symbol form.
Don't use old-style relative imports:
# Import from foo/foo_loop.py
# This DOES NOT WORK in Python 3
# and MAY NOT WORK AS EXPECTED in Python 2
from foo_loop import FooLoop
# This is reliable and unambiguous
from .foo_loop import FooLoop
Don't use asterisk import unless you really have to.
# Namespace pollution! Name clashes!
from .submodule import *
Don't use prefixes - you've got namespaces exactly for that purpose.
# Unpythonic
from foo import foo_something_create
foo_something_create()
# Pythonic
import foo.something
foo.something.create()
Your package's API must be well-defined. Your implementation must not be too tangled. The rest is a matter of taste.
# [C] This is good.
# Import order: __init__.py, exceptions.py
from . import exceptions
raise exceptions.FooError
# [D] This is also fine.
# Import order is the same as above,
# only name binding inside the current module is different.
from .exceptions import FooError
raise FooError
# [E] This is not as good because it adds one unnecessary level of indirection
# submodule.py -> __init__.py -> exceptions.py
from . import FooError
raise FooError
See also: Circular (or cyclic) imports in Python
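To make the circular-import trade-off concrete, here is a minimal sketch with a hypothetical package pkg containing two mutually dependent modules. The from-module-import-name form fails during the cycle, while importing the module object and deferring the attribute lookup to call time survives it:

# pkg/a.py
from pkg.b import helper   # starts executing b before a has finished loading

def use():
    return helper()

# pkg/b.py
from pkg.a import use      # a is only half-initialized at this point:
                           # ImportError: cannot import name 'use'

def helper():
    return "ok"

# The module-object form tolerates the same cycle, because each attribute
# is only looked up at call time, after both modules have finished loading:

# pkg/a.py
import pkg.b

def use():
    return pkg.b.helper()

# pkg/b.py
import pkg.a

def helper():
    return "ok"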
