'spaceship' class is not defined (even though I've imported it?) - python-3.x

I run the main module, which I believe should work, but an error is raised: 'spaceship' is not defined, on the line where I call s = spaceship(parameters). Why is this? I don't get it. I'm using Zelle graphics for Python. Thank you.
Functions from the main module (the spaceshipGame file):
from graphics import *
from spaceshipClass import *

def main():
    window = createGraphicsWindow()
    runGame(window)

def createGraphicsWindow():
    win = GraphWin("Spaceship game", 800, 800)
    return win

def createSpaceship(window, p1, p2, p3, speed, colour):
    s = spaceship(window, p1, p2, p3, speed, colour)
    return s

def runGame(window):
    player = createSpaceship(window, Point(500, 500), Point(500, 470), Point(520, 485), 0.5, "red")
    player.draw(window)

main()
The spaceshipClass file:
from spaceshipGame import *
from graphics import *

class spaceship:
    def __init__(self, window, p1, p2, p3, speed, colour):
        self.p1 = p1
        self.p2 = p2
        self.p3 = p3
        self.speed = speed
        self.colour = colour
        self.window = window

Never mind, I see the problem. Consult this example for more information:
Simple cross import in python
The problem is the way you are cross-importing: delete from spaceshipGame import * from spaceshipClass, or vice versa (i.e. delete from spaceshipClass import * from spaceshipGame). If one module only needs a few names from the other, import them individually, as in the example linked above.
There are also many other ways around it, described in that example. One of the easiest is simply merging the two files into one if they need to share a lot of code. A minimal sketch of the individual-import fix is below.
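Concretely, a sketch of that fix, assuming spaceshipClass.py does not actually need anything from spaceshipGame (in the code shown, it doesn't): drop the import that closes the cycle, and have spaceshipGame import just the class it uses.
# spaceshipClass.py -- no import of spaceshipGame, which breaks the cycle
from graphics import *

class spaceship:
    def __init__(self, window, p1, p2, p3, speed, colour):
        self.p1, self.p2, self.p3 = p1, p2, p3
        self.speed = speed
        self.colour = colour
        self.window = window

# spaceshipGame.py -- import only the name it actually uses
from graphics import *
from spaceshipClass import spaceship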

Related

The best way to share a class between processes

First of all, I'm pretty new to multiprocessing and I'm here to learn from all of you. I have several files doing something similar to this:
SharedClass.py:
class simpleClass():
    a = 0
    b = ""
    .....
MyProcess.py:
import multiprocessing
import SharedClass

class FirstProcess(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)

    def modifySharedClass(self):
        # Here I want to modify the object shared with Main.py defined in SharedClass.py
        pass
Main.py:
from MyProcess import FirstProcess
import SharedClass

if __name__ == '__main__':
    pr = FirstProcess()
    pr.start()
    # Here I want to print the initial value of the shared class
    pr.modifySharedClass()
    # Here I want to print the modified value of the shared class
I want to define a shared class (in SharedClass.py) in some kind of shared memory that can be read and written by both Main.py and MyProcess.py.
I have tried using multiprocessing's Manager and multiprocessing.Array, but I'm not having good results: the changes made in one file are not being reflected in the other (maybe I'm doing this the wrong way).
Any ideas? Thank you.
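For what it's worth, here is a minimal sketch of the usual Manager-based pattern, assuming the shared state can be reduced to a few attributes: create a managed Namespace in the parent and pass it explicitly to the Process subclass, so writes made in the child are visible in the parent.
import multiprocessing

class FirstProcess(multiprocessing.Process):
    def __init__(self, shared):
        super().__init__()
        self.shared = shared  # a Manager proxy; safe to hand to the child

    def run(self):
        # Runs in the child process; writes go through the manager proxy
        self.shared.a = 42
        self.shared.b = "modified in child"

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    shared = manager.Namespace()
    shared.a = 0
    shared.b = ""
    print(shared.a, shared.b)   # initial values: 0 ''
    pr = FirstProcess(shared)
    pr.start()
    pr.join()                   # wait so the child's write has happened
    print(shared.a, shared.b)   # 42 'modified in child'
The key point is that a plain class-level attribute, as in SharedClass.py, lives separately in each process's own memory; only the Manager proxy (or shared-memory types like multiprocessing.Value and multiprocessing.Array) is actually shared.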

How to pass the production variable to Authorize.Net API?

I am working on retrieving transactions with the Authorize.Net API.
I am using their code sample, and the SDK documentation says that in order to switch to the production environment, I need to set the environment variable on the controller.
The link is here. I am not sure where I should add this line of code:
createtransactioncontroller.setenvironment(constants.PRODUCTION)
The rest of the code is here.
Is this the right way to use the controller?
import os
import sys
import imp
from datetime import datetime, timedelta
from authorizenet import apicontractsv1
from authorizenet.apicontrollers import getSettledBatchListController
from authorizenet.apicontrollers import createTransactionController

constants = imp.load_source('modulename', 'constants.py')

def get_settled_batch_list():
    """get settled batch list"""
    createTransactionController.setenvironment(constants.PRODUCTION)
    merchantAuth = apicontractsv1.merchantAuthenticationType()
I had this same error, and the way I fixed it was to rename the file constants.py to credentials.py and change the variable to MY_CONSTANTS (you can keep calling them credentials if you want).
If it still doesn't work at that point, you could try hard-coding the endpoint instead with createtransactioncontroller.setenvironment('https://api2.authorize.net/xml/v1/request.api'); otherwise, leave it as constants.PRODUCTION:
createtransactioncontroller = createTransactionController(createtransactionrequest)
createtransactioncontroller.setenvironment(constants.PRODUCTION)
# or createtransactioncontroller.setenvironment('https://api2.authorize.net/xml/v1/request.api')
createtransactioncontroller.execute()
I used a dictionary for my credentials (constants in your case), so mine looks a little different:
import imp
import os
import sys
import importlib
from authorizenet.constants import constants
from authorizenet import apicontractsv1
from authorizenet.apicontrollers import createTransactionController
from .credentials import MY_CONSTANTS
# retrieved from the constants file
merchantAuth = apicontractsv1.merchantAuthenticationType()
merchantAuth.name = MY_CONSTANTS['apiLoginId']
merchantAuth.transactionKey = MY_CONSTANTS['transactionKey']
I hope this helped you.
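To tie this back to the question's get_settled_batch_list, here is a sketch of how it might look, assuming the standard SDK flow (build the request, construct the controller from it, then call setenvironment on the controller instance rather than on the class); the credentials module is the hypothetical rename suggested above.
from authorizenet import apicontractsv1
from authorizenet.apicontrollers import getSettledBatchListController
from authorizenet.constants import constants
from credentials import MY_CONSTANTS  # hypothetical module, as renamed in this answer

def get_settled_batch_list():
    """Fetch settled batches from the production endpoint."""
    merchantAuth = apicontractsv1.merchantAuthenticationType()
    merchantAuth.name = MY_CONSTANTS['apiLoginId']
    merchantAuth.transactionKey = MY_CONSTANTS['transactionKey']

    request = apicontractsv1.getSettledBatchListRequest()
    request.merchantAuthentication = merchantAuth

    controller = getSettledBatchListController(request)
    controller.setenvironment(constants.PRODUCTION)  # on the instance, not the class
    controller.execute()
    return controller.getresponse()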

Replace package import in a module

I use a module that imports a function via a relative (dot-notation) package import:
from .utils import target_func

class ClassINeed:
    def function_i_call(self):
        return target_func()
I want to import ClassINeed with from classineed import ClassINeed but replace target_func with a function of my own. Problem is, target_func is not part of the class I am importing. Therefore I do not see a way to access it. What would be a way to accomplish this?
On top of from classineed import ClassINeed, also do an import classineed, then override target_func as needed, e.g. classineed.target_func = lambda: 'hello!'.
P.S. Referring to the class as classineed.ClassINeed might be cleaner if you already have import classineed.
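A minimal sketch of that monkey-patching approach, assuming classineed.py is the module shown above: because function_i_call looks target_func up in classineed's module namespace at call time, rebinding that name before the call is enough.
import classineed

def my_replacement():
    # Hypothetical stand-in for the real target_func
    return 'hello!'

# Rebind the module-level name; every later call of function_i_call sees it
classineed.target_func = my_replacement

obj = classineed.ClassINeed()
print(obj.function_i_call())  # -> 'hello!'
Note that this swaps the function for every user of classineed in the process, which is usually what you want in tests but worth keeping in mind.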

OpenMDAO 1.x: recording in parallel

When running an analysis under MPI with distributed components in a ParallelGroup, I get an error when adding a DumpRecorder to the analysis. Below is a small example that demonstrates this (run with the latest master-branch commit aaa67a4d51f4081e9e41b250b0a76b077f6f0c21 from 28/10/2015):
import numpy as np

from openmdao.core.mpi_wrap import MPI
from openmdao.api import Component, Group, DumpRecorder, Problem, ParallelGroup

class Sliced(Component):
    def __init__(self):
        super(Sliced, self).__init__()
        self.add_param('x', 0.)
        self.add_output('y', 0.)

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['y'] = params['x'] * 2.

class VectorComp(Component):
    def __init__(self, size):
        super(VectorComp, self).__init__()
        self.add_param('xin', np.zeros(size))
        self.add_output('x', np.zeros(size))

    def solve_nonlinear(self, params, unknowns, resids):
        unknowns['x'] = params['xin'] * 2.

class Analysis(Group):
    def __init__(self, size):
        super(Analysis, self).__init__()
        self.add('v', VectorComp(size), promotes=['*'])
        par = self.add('par', ParallelGroup())
        for i in range(size):
            par.add('sec%02d' % i, Sliced())
            self.connect('x', 'par.sec%02d.x' % i, src_indices=[i])

if __name__ == '__main__':
    if MPI:
        from openmdao.core.petsc_impl import PetscImpl as impl
    else:
        from openmdao.core.basic_impl import BasicImpl as impl

    p = Problem(impl=impl, root=Analysis(4))
    recorder = DumpRecorder('optimization.log')
    # adding specific includes works, but leaving it out results in a crash
    # recorder.options['includes'] = ['x']
    p.driver.add_recorder(recorder)
    p.setup()
    p.run()
The error which is raised is:
RuntimeError: Cannot access remote Variable 'par.sec00.x' in this process.
I see that the recorder dumps a file per processor, so shouldn't the BaseRecorder._filter_vectors method filter out params not present on a specific processor? I'm not yet familiar enough with the code to propose a fix, so I hope the OpenMDAO devs can easily figure out what goes wrong.
Manually specifying the includes works, since the Sliced parameters are then excluded, but it would be nice if this were not necessary and were dealt with under the hood.
I also want to let you guys know how excited we are about the new framework. It is so much faster than the 0.x version, and the parallel FD feature is much appreciated and works like a charm!
There were some recent changes that broke the dump recorder in parallel. We put a story up for someone to fix it, but in the meantime you might want to try the SqliteRecorder. It's what I have been using for performance testing on CADRE. You set it up the same way, but then read the values back using an sqlitedict. There is a small example in the docs, and a more practical example in the CADRE code:
https://github.com/OpenMDAO/CADRE/blob/master/plot_progress.py
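For reference, a hedged sketch of that workaround against the OpenMDAO 1.x API of the time, continuing from the p = Problem(...) setup in the question; the exact case-key names stored in the database depend on the driver, so treat the read-back loop as illustrative.
from openmdao.api import SqliteRecorder
from sqlitedict import SqliteDict

# Recording: swap the DumpRecorder for an SqliteRecorder
recorder = SqliteRecorder('optimization.sqlite')
p.driver.add_recorder(recorder)
p.setup()
p.run()
p.cleanup()  # closes the recorder's file handle

# Reading back after the run: one entry per recorded iteration
db = SqliteDict('optimization.sqlite', 'openmdao')
for case_key, data in db.items():
    print(case_key, data['Unknowns']['x'])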

Which form of relative import to prefer inside a package

I'm writing a library named Foo as an example.
The __init__.py file:
from .foo_exceptions import *
from .foo_loop import FooLoop
main_loop = FooLoop()
from .foo_functions import *
__all__ = ['main_loop'] + foo_exceptions.__all__ + foo_functions.__all__
When installed, it can be used like this:
# example A
from Foo import foo_create, main_loop
foo_obj = foo_create()
main_loop().register(foo_obj)
or like this:
# example B
import Foo
foo_obj = Foo.foo_create()
Foo.main_loop().register(foo_obj)
I clearly prefer the example B approach: no name conflicts, and the source of each external object is explicitly stated.
So much for the introduction; now my question. Inside this library I need to import something from a different file. Again, I have several ways to do it, and the question is which style to prefer: C, D or E? Read below.
# example C
from . import foo_exceptions
raise foo_exceptions.FooError("fail")
or
# example D
from .foo_exceptions import FooError
raise FooError("fail")
or
# example E
from . import FooError
raise FooError("fail")
Approach C has the disadvantage that importing a whole module, instead of just the few required objects, increases the chance of a circular import problem. Also consider this line:
from . import foo_exceptions, main_loop
It looks like an import of two symbols from one source, but it isn't: the former (foo_exceptions) is a module (a .py file in the package directory), while the latter is an object defined in __init__.py.
That's why I'm not using style C and the question in its final form is: D or E (and why)?
(Thank you for reading this long question. All code fragments are examples only and may contain typos)
After the answer from alexanderlukanin:
EDIT1: corrected errors in __init__.py
NOTE1: foo_ prefixes are only to emphasize the relationship between objects
EDIT2: When importing an object which is not part of the library interface, style E is not usable. I think we have a winner: It's the from .module import symbol form.
Don't use old-style relative imports:
# Import from foo/foo_loop.py
# This DOES NOT WORK in Python 3
# and MAY NOT WORK AS EXPECTED in Python 2
from foo_loop import FooLoop
# This is reliable and unambiguous
from .foo_loop import FooLoop
Don't use asterisk import unless you really have to.
# Namespace pollution! Name clashes!
from .submodule import *
Don't use prefixes - you've got namespaces exactly for that purpose.
# Unpythonic
from foo import foo_something_create
foo_something_create()
# Pythonic
import foo.something
foo.something.create()
Your package's API must be well-defined. Your implementation must not be too tangled. The rest is a matter of taste.
# [C] This is good.
# Import order: __init__.py, exceptions.py
from . import exceptions
raise exceptions.FooError
# [D] This is also fine.
# Import order is the same as above,
# only name binding inside the current module is different.
from .exceptions import FooError
raise FooError
# [E] This is not as good because it adds one unnecessary level of indirection
# submodule.py -> __init__.py -> exceptions.py
from . import FooError
raise FooError
See also: Circular (or cyclic) imports in Python
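To make the D-versus-E difference concrete, here is a runnable toy demonstration (a throwaway package with hypothetical names, built in a temp directory): style D keeps working even while the package's __init__.py is still executing, which is exactly the situation where style E breaks.
import os
import sys
import tempfile

# Build a throwaway package on disk: foo/exceptions.py defines FooError,
# foo/submodule.py imports it directly (style D), and foo/__init__.py
# imports submodule BEFORE it binds FooError on the package.
pkg = os.path.join(tempfile.mkdtemp(), 'foo')
os.makedirs(pkg)
with open(os.path.join(pkg, 'exceptions.py'), 'w') as f:
    f.write("class FooError(Exception):\n    pass\n")
with open(os.path.join(pkg, 'submodule.py'), 'w') as f:
    f.write("from .exceptions import FooError\n")  # style D: always safe
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    # If submodule.py used style E (from . import FooError), this import
    # would fail: foo.FooError is not bound yet when submodule.py runs.
    f.write("from . import submodule\nfrom .exceptions import FooError\n")

sys.path.insert(0, os.path.dirname(pkg))
import foo
print(foo.submodule.FooError is foo.FooError)  # True: one class, two routes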
