I want to import asyncore from a different directory, because I need to make some changes to how asyncore works, and don't want to modify the base file.
I could include it in the folder with my script, but after putting all the modules I need there it ends up getting rather cluttered.
I'm well aware of making a subdirectory and putting a blank __init__.py file in it. This doesn't work. I'm not exactly sure what happens, but when I import asyncore from a subdirectory, asyncore just plain stops working. Specifically, the connect method doesn't get run at all, even though I'm calling it. Moving asyncore to the main directory and importing it normally removes this problem.
I trimmed my code down significantly, but this still has the same problem:
from Modules import asyncore
from Modules import asynchat
from Modules import socket

class runBot(asynchat.async_chat, object):
    def __init__(self):
        asynchat.async_chat.__init__(self)
        self.connect_to_twitch()

    def connect_to_twitch(self):
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect(('irc.chat.twitch.tv', 6667))
        self.set_terminator('\n')
        self.buffer = []

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        msg = ''.join(self.buffer)
        print(msg)

if __name__ == '__main__':
    # Assign bots to channels
    bot = runBot()
    # Start bots
    asyncore.loop(0.001)
I'm sure this is something really simple I'm overlooking, but I'm just not able to figure this out.
Use sys.path.append -- see https://docs.python.org/3/tutorial/modules.html for the details.
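For example, a minimal sketch (assuming the Modules folder sits next to the script; note that for your copy of asyncore to win over the standard library one, the directory has to come first on the search path, so insert rather than append):

import os
import sys

# Put ./Modules at the FRONT of the module search path so it shadows stdlib names.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'Modules'))

import asyncore  # now resolves to Modules/asyncore.py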
Update: Try putting a debug print at the beginning and end of your modules' source files to see whether they are imported as expected. You can also print the __file__ attribute of the module/object that you want to use, to check whether you imported what you expected -- like:
import re
#...
print(re.__file__)
I'm a bit confused about how global variables work. I have a large project, with around 50 files, and I need to define global variables for all those files.
What I did was define them in my project's main.py file, as follows:
# ../myproject/main.py
# Define global myList
global myList
myList = []
# Imports
import subfile
# Do something
subfile.stuff()
print(myList[0])
I'm trying to use myList in subfile.py, as follows:
# ../myproject/subfile.py
# Save "hey" into myList
def stuff():
    globals()["myList"].append("hey")
Another way I tried, which didn't work either:
# ../myproject/main.py
# Import globfile
import globfile
# Save myList into globfile
globfile.myList = []
# Import subfile
import subfile
# Do something
subfile.stuff()
print(globfile.myList[0])
And inside subfile.py I had this:
# ../myproject/subfile.py
# Import globfile
import globfile

# Save "hey" into myList
def stuff():
    globfile.myList.append("hey")
But again, it didn't work. How should I implement this? I understand that it cannot work like that, when the two files don't really know each other (well, subfile doesn't know main), but I can't think of how to do it without using file I/O or pickle, which I don't want to do.
The problem is that you defined myList in main.py, but subfile.py needs to use it. Here is a clean way to solve this problem: move all globals to a file; I call this file settings.py. This file is responsible for defining the globals and initializing them:
# settings.py
def init():
    global myList
    myList = []
Next, your subfile can import globals:
# subfile.py
import settings

def stuff():
    settings.myList.append('hey')
Note that subfile does not call init() -- that task belongs to main.py:
# main.py
import settings
import subfile

settings.init()           # Call only once
subfile.stuff()           # Do stuff with global var
print(settings.myList[0]) # Check the result
This way, you achieve your objective while avoiding initializing the global variables more than once.
See Python's documentation on sharing global variables across modules:
The canonical way to share information across modules within a single program is to create a special module (often called config or cfg).
config.py:
x = 0 # Default value of the 'x' configuration setting
Import the config module in all modules of your application; the module then becomes available as a global name.
main.py:
import config
print(config.x)
In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
You can think of Python global variables as "module" variables - and as such they are much more useful than the traditional "global variables" from C.
A global variable is actually defined in a module's __dict__ and can be accessed from outside that module as a module attribute.
So, in your example:
# ../myproject/main.py
# Define global myList
# global myList - there is no "global" declaration at module level, just inside
# functions and methods
myList = []
# Imports
import subfile
# Do something
subfile.stuff()
print(myList[0])
And:
# ../myproject/subfile.py
# Save "hey" into myList
def stuff():
    # You have to make the module main available to the code here.
    # Placing the import inside the function body will usually avoid
    # import cycles - unless you happen to call this function from
    # either main's or subfile's body (i.e. not from inside a function
    # or method).
    import main
    main.myList.append("hey")
Using from your_file import * should fix your problems. It defines everything so that it is globally available (with the exception of local variables in the imports of course).
for example:
##test.py:
from pytest import *

print(hello_world)
and:
##pytest.py
hello_world="hello world!"
Hai Vu's answer works great; just one comment. In case you are using the global in another module and you want to set the global dynamically, pay attention to import the other modules after you set the global variables, for example:
# settings.py
def init(arg):
    global myList
    myList = []
    myList.append(arg)
# subfile.py
import settings

def show():
    print(settings.myList[0])

# main.py
import settings

settings.init("1st")  # init the globals before they are used in other
                      # imported modules, or else they will be undefined
import subfile
subfile.show()  # global usage
Your 2nd attempt will work perfectly, and is actually a really good way to handle variable names that you want to have available globally. But you have a name error in the last line. Here is how it should be:
# ../myproject/main.py
# Import globfile
import globfile
# Save myList into globfile
globfile.myList = []
# Import subfile
import subfile
# Do something
subfile.stuff()
print(globfile.myList[0])
See the last line? myList is an attr of globfile, not subfile. This will work as you want.
Mike
I just came across this post and thought of posting my solution, in case anyone is in the same situation as me, where there are quite a few files in the developed program and you don't have the time to think through the whole import sequence of your modules (if you didn't think of that properly right from the start, as I didn't).
In such cases, in the script where you initiate your global(s), simply code a class like:
class My_Globals:
    def __init__(self):
        self.global1 = "initial_value_1"
        self.global2 = "initial_value_2"
        ...
and then, in the script where you initialized your globals, instead of
global1 = "initial_value_1"
use
globals = My_Globals()
I was then able to retrieve / change the values of any of these globals via
globals.desired_global
in any script, and these changes were automatically applied in all the other scripts using them. Everything worked now, using the exact same import statements that previously failed due to the problems mentioned in this discussion. Since the object's properties change dynamically, there is no need to consider or change any import logic, in contrast to plain importing of global variables, and that was definitely the quickest and easiest (for later access) way to solve this kind of problem for me.
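A minimal sketch of that pattern (file and attribute names here are placeholders): the instance is created once, in one module, and every other script imports that module and reads or changes the instance's attributes:

# globals_store.py -- hypothetical module holding the single shared instance
class My_Globals:
    def __init__(self):
        self.global1 = "initial_value_1"
        self.global2 = "initial_value_2"

my_globals = My_Globals()  # created once, on first import

# any_other_script.py
import globals_store

def change_it():
    # attribute changes on the shared instance are visible to every importer
    globals_store.my_globals.global1 = "changed"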
Based on the answers above and the links within them, I created a new module called global_variables.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
#   global_variables.py - Global variables shared by all modules.
#
# ==============================================================================

USER = None  # User ID, Name, GUID; varies by platform

def init():
    """ This should only be called once by the main module.
        Child modules will inherit values. For example, if they contain

            import global_variables as g

        later on they can reference 'g.USER' to get the user ID.
    """
    global USER
    import getpass
    USER = getpass.getuser()

# End of global_variables.py
Then in my main module I use this:
import global_variables as g
g.init()
In another child imported module I can use:
import global_variables as g
# hundreds of lines later....
print(g.USER)
I've only spent a few minutes testing in two different python multiple-module programs but so far it's working perfectly.
Namespace nightmares arise when you do from config import mySharedThing. That can't be stressed enough.
It's OK to use from in other places.
You can even have a config module that's totally empty.
# my_config.py
pass

# my_other_module.py
import my_config

def doSomething():
    print(my_config.mySharedThing.message)

# main.py
from dataclasses import dataclass
from my_other_module import doSomething
import my_config

@dataclass
class Thing:
    message: str

my_config.mySharedThing = Thing('Hey everybody!')
doSomething()
result:
$ python3 main.py
Hey everybody!
But using objects you pulled in with from will take you down a path of frustration.
# my_other_module.py
from my_config import mySharedThing

def doSomething():
    print(mySharedThing.message)
result:
$ python3 main.py
ImportError: cannot import name 'mySharedThing' from 'my_config' (my_config.py)
And maybe you'll try to fix it like this:
# my_config.py
mySharedThing = None
result:
$ python3 main.py
AttributeError: 'NoneType' object has no attribute 'message'
And then maybe you'll find this page and try to solve it by adding an init() method.
But the whole problem is the from.
I'm struggling to refactor some working import-hook functionality that has served us very well on Python 2 for the last few years... And honestly, I wonder if something is broken in Python 3? But I can't find any reports of that, so my confidence that I'm doing something wrong myself is still stronger! OK. Code:
Here is a boiled-down version for Python 3 with PathFinder from importlib.machinery:
import sys
from importlib.machinery import PathFinder

class MyImporter(PathFinder):
    def __init__(self, name):
        self.name = name

    def find_spec(self, fullname, path=None, target=None):
        print('MyImporter %s find_spec fullname: %s' % (self.name, fullname))
        return super(MyImporter, self).find_spec(fullname, path, target)

sys.meta_path.insert(0, MyImporter('BEFORE'))
sys.meta_path.append(MyImporter('AFTER'))
print('sys.meta_path:', sys.meta_path)

# import an example module
import json
print(json)
So you see: I insert an instance of the class right in front and one at the end of sys.meta_path. Turns out ONLY the first one triggers! I never see any calls to the last one. That was different in Python 2!
Looking at the implementation in six I thought, well THEY need to know how to do this properly! ... 🤨 I don't see this working either! When I try to step in there or just put some prints... Nada!
After all: IF I actually put my importer first in the sys.meta_path list, trigger on a certain import and patch my module (which all works fine), it still gets overridden by the other importers in the list!
* How can I prevent that?
* Do I need to do that? It seems dirty!
I have been heavily studying the meta_path in Python 3.8.
The entire import mechanism has been moved from C to Python and manifests itself as sys.meta_path, which contains 3 importers. The Python import machinery is cleverly stupid, i.e. uncomplex. The source code of the entire Python import can be found in importlib/.
meta_path[1] pulls in modules that are frozen into the interpreter as pre-compiled bytecode; that is how importlib itself gets bootstrapped.
__import__ is still the central hook called when you "import mymod":
__import__() first checks whether the module has already been imported, in which case it retrieves it from sys.modules.
If that doesn't work, it calls find_spec() on each "spec finder" in meta_path.
If a "spec finder" is successful, it returns a "spec" needed by the next stage.
If none of them find it, the import fails.
sys.meta_path is an array of "spec finders":
0: is the builtin spec finder (sys, _sre)
1: is the frozen importlib: it imports the importer (importlib)
2: is the path finder: it finds both library modules (os, re, inspect) and your application modules, based on sys.path
So regarding the question above, it shouldn't be happening. If your spec finder is first in the meta_path and it returns a valid spec then the module is found, and remaining entries in sys.meta_path won't even be asked.
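You can check both the three default finders and the short-circuiting yourself (a sketch; the exact names can vary between Python versions):

import sys
import importlib.util

# The default meta_path entries are classes, so read their __name__ directly.
print([getattr(f, '__name__', repr(f)) for f in sys.meta_path])
# e.g. ['BuiltinImporter', 'FrozenImporter', 'PathFinder']

# find_spec() walks sys.meta_path in order and returns the FIRST spec found,
# so later finders are never consulted for that module.
print(importlib.util.find_spec('json').loader)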
I'm developing a small app using kivy and Python 3.6 (I'm still a beginner). I'm planning to separate the code into different files for clarity; however, I have encountered a problem in a specific situation. I have made a minimal working example to illustrate.
I have the following files:
main.py
main.kv
module.py
module.kv
Here is the minimal code:
main.py:
from kivy.app import App
from kivy.uix.button import Button
from kivy.lang import Builder

import module

Builder.load_file('module.kv')

class MainApp(App):
    pass

def function():
    print('parent function')

if __name__ == '__main__':
    MainApp().run()
main.kv:
CallFunction
module.py:
from kivy.uix.button import Button

class CallFunction(Button):
    def call_function(self):
        from main import function
        function()
module.kv:
<CallFunction>:
    id: parent_button
    text: 'Call parent button'
    on_press: self.call_function()
So the problem is that when I run this code, I receive a warning
The file /home/kivy/python_exp/test/module.kv is loaded multiples times, you might have unwanted behaviors.
What works:
If the function I want to call is part of the main app class, there is no problem
If the function is part of the module.py there is no problem
If the function is part of another module, there is no problem
What doesn't work
I cannot call a function which is in main.py. If I import the function at the beginning of module.py, kivy behaves weirdly and calls everything twice. Importing inside call_function gives a proper interface, but I get the warning that the file has been loaded multiple times.
There are easy workarounds, I'm well aware of that, so it's more about curiosity and understanding better how imports in kivy work. Is there a way to make it work?
I wanted to use main.py to initialize different things at the startup of the app. In particular, I wanted to create an instance of another class (not a kivy class) in main.py and, when clicking the button on the interface, call a method on this instance.
Thanks :)
When you import something from another Python module, the Python virtual machine executes that module. In call_function you import function from the main file, so that import executes main.py again and module.kv is loaded a second time.
To solve this it is recommended to include the other kv files in your main kv file.
You can also move the import statement from the method to the top of the module file.
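For example, module.kv could be pulled in by main.kv itself via the kv language's #:include directive (a sketch; the Builder.load_file('module.kv') line in main.py must then be removed so the file is only loaded once):

#:include module.kv

CallFunction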
Your kv file is loaded twice because the code is executed twice. This is due to how Python's module system works, and kivy just realized that loading the kv twice is probably not what you want.
Generally python objects live in a namespace. So when a function in the module foo looks up a variable the variable is searched in the namespace of the module. That way if you define two variables foo.var and bar.var (in the modules foo and bar resp.) they don't clash and get confused for each other.
The tricky thing is that the python file you execute is special: It does not create a module namespace but the __main__ namespace. Thus if you import the file you are executing as __main__ it will create a whole new namespace with new objects and execute the module code. If you import a module that was already imported in the current session the module code is not executed again, but the namespace already created is made available. You don't even need two files for that, put the following in test.py:
print("hello!")
print(__name__)
import test
If you now execute python test.py you will see hello! twice, plus __main__ once and test once.
You can find more information on namespaces and how variable lookups works in python in the documentation.
Also, if your function actually does some work and mutates an object that lives in main.py, you might want to rethink the information flow. Often it is a good idea to bind state and the functions working on it together in classes, and to pass the objects to where they are called, i.e. to CallFunction in your example. A minimal sketch of that idea is below.
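A sketch (all names here are made up, not kivy API):

# controller.py -- hypothetical module holding the shared state
class Controller:
    def __init__(self):
        self.presses = 0

    def on_press(self):
        self.presses += 1
        print('parent function', self.presses)

# main.py would create the instance once and hand it to the widget, e.g.:
#   ctrl = Controller()
#   CallFunction(controller=ctrl)  # the widget then calls self.controller.on_press()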
I've seen plenty of importing questions but didn't find any that explained importing very "easily". There are 3 types of importing that I know of, and none of them seem to do what I'm looking for or would like to do. E.g. main.py has def a(), def b(), class c() and def d(), and let's say I have a start.py file.
main:
def a():
    print("1")

def b():
    print("2")

class c():
    def __init__(self, name="Rick"):
        self.name = name

def d():
    print("4")
So now in my start.py file I want to import everything from main. What is the best way? I have tried using import main and I run into issues: after creating an instance of class c [ricky = c()], ricky isn't defined, or accessing ricky.name says the module has no attribute name. So that doesn't seem to work. What is this even used for if you aren't importing the entire main.py file?
Then there is from main import a, b, c, d, which seems to work just fine, but there really has to be another way besides listing every single function, class, and variable.
Third, there is from main import *. I'm not sure how this one works; I have read that there should be an __all__ = [...] list of everything I want imported, but where do I put this? At the very top of main.py? There still should be a better way.
Is my import main just not working correctly, or do I have to list everything I want to import, either in a from main import statement or in an __all__ list?
Does importing carry over to another py file? E.g. with 1.py, 2.py and 3.py: if inside 2.py I import 3.py correctly and everything works, can I just import 2.py in 1.py and have 3.py come along through the import statement inside 2.py, or do I have to import 2.py and 3.py again in 1.py?
The 3 main imports:
import module -- using this will import all the module's classes and functions for use, but it does not make the module's own imports available unqualified. To call something in the imported module, e.g. module MAIN, function FUNC, you call: MAIN.FUNC()
from module import FUNC, CLASS, ... -- when using this import you don't need to prefix the name with the module; it is almost as if it were defined right in front of you. E.g. module MAIN, function FUNC: you call FUNC()
from module import * -- a combination of the previous two imports. It will import everything from the module to be accessed without the module prefix. This form imports the module's other imports as well, so if you have two modules that need to talk to each other, using this will cause an error, since you will be trying to import a module into another module and then back into itself again (A imported into B, then A and B back into A). It doesn't work, so watch out when using it. It may also cause other import errors when importing multiple modules that share imports, e.g. importing A and B into C when A and B share D (pending testing). A sketch of __all__, which controls what * exports, follows below.
from MAIN import * -- call function: FUNC()
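As for where __all__ goes: it is a plain list of names at the top level of the module being imported from, and it only affects the * form -- a minimal sketch using the main.py from the question:

# main.py
__all__ = ['a', 'b', 'c']  # only these names are exported by `from main import *`

def a():
    print("1")

def b():
    print("2")

class c():
    def __init__(self, name="Rick"):
        self.name = name

def d():  # not listed in __all__, so `from main import *` skips it
    print("4")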
Hope this helps other people out who are having issues understanding exactly how importing works, and how to call your functions/classes and whatnot.
I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once, a second time gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive, there is no possible reason to ever want both in a singular program.
I have several test_xxx.py modules in my test directory. Each module will init the package in the way it needs (using fixtures). Since py.test starts the Python interpreter once, running all test modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc that might result in unexplained behaviour.
Is it possible to tell py.test to run each test module in a separate Python process (thereby not being influenced by inits in another test module)?
Is there a way to reliably reload a package (including all sub-dependencies, etc)?
Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)?
To reload a module, try using reload() from the importlib library:
Example:
from importlib import reload
import some_lib
#do something
reload(some_lib)
Also, launching each test in a new process is viable, but multiprocessed code is kind of painful to debug.
Example
import some_test
from multiprocessing import Manager, Process

# create a new return-value holder, in this case a list
manager = Manager()
return_value = manager.list()

# create a new process (some_test.some_function and arg are placeholders)
process = Process(target=some_test.some_function, args=(arg, return_value))

# execute the process
process.start()

# wait for the process to finish
process.join()

# you can now use your return value as if it were a normal list,
# as long as it was assigned in your subprocess
Delete all your module imports, and also your test imports that in turn import your modules, from sys.modules:
import sys
for key in list(sys.modules.keys()):
if key.startswith("your_package_name") or key.startswith("test"):
del sys.modules[key]
You can use this as a fixture by configuring a fixture in your conftest.py file using the @pytest.fixture decorator, as sketched below.
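A minimal sketch of such a conftest.py fixture (the package name prefix is a placeholder):

# conftest.py
import sys

import pytest

@pytest.fixture
def fresh_package():
    yield  # run the test first
    # afterwards, drop the package so the next test re-imports it from scratch
    for key in list(sys.modules.keys()):
        if key.startswith("your_package_name"):
            del sys.modules[key]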
Once I had a similar problem; quite bad design, though...
import importlib
import sys

import pytest

@pytest.fixture()
def module_type1():
    mod = importlib.import_module('example')
    mod._init(10)
    yield mod
    del sys.modules['example']

@pytest.fixture()
def module_type2():
    mod = importlib.import_module('example')
    mod._init(20)
    yield mod
    del sys.modules['example']

def test1(module_type1):
    pass

def test2(module_type2):
    pass
The example/__init__.py had something like this:
def _init(val):
    if 'sample' in globals():
        logger.info(f'example already imported, val: {sample}')
    else:
        globals()['sample'] = val
        logger.info(f'importing example with val: {val}')
output:
importing example with val : 10
importing example with val : 20
No clue as to how complex your package is, but if it's just global variables, then this probably helps.
I have the same problem, and found three solutions:
reload(some_lib)
patch the SUT: as the imported method is a key and value in the SUT's namespace, you can patch the SUT. For example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2 (a sketch follows below)
import the module, and use module.function.
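A minimal sketch of that second option with unittest.mock (module names m1/m2/f2 as in the description above):

# m1.py does `from m2 import f2`, so the name to patch lives in m1, not m2:
from unittest.mock import patch

with patch('m1.f2') as fake_f2:
    fake_f2.return_value = 42
    # code under test that goes through m1's reference to f2 now sees the mock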