Using Library inspect to list functions in a package - python-3.5

I am trying to get the list of functions in a package.
The package contains some internal functions that I would like to exclude.
For example, this is how my package looks:
def _test():
    pass
def printNames():
    pass
def returnSum():
    pass
What I am trying to do is get the members as a list: [printNames, returnSum]
Code:
inspect.getmembers(package, inspect.isfunction)
What this returns to me instead is [printNames, returnSum, _test]
All internal functions start with an underscore '_'.
How can I exclude such functions?
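Not part of the original question, but a common pattern: since getmembers() returns (name, object) pairs, you can filter out names that start with an underscore in a comprehension. A minimal sketch using a throwaway in-memory module in place of the real package:

```python
import inspect
import types

# Stand-in for the package from the question, built in memory.
package = types.ModuleType("package")
exec(
    "def _test(): pass\n"
    "def printNames(): pass\n"
    "def returnSum(): pass\n",
    package.__dict__,
)

# getmembers() yields (name, function) pairs; drop leading-underscore names.
public_funcs = [
    func
    for name, func in inspect.getmembers(package, inspect.isfunction)
    if not name.startswith("_")
]
print([f.__name__ for f in public_funcs])  # ['printNames', 'returnSum']
```

The same comprehension works unchanged on a real imported package.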

Related

Python function global variable not finding it if function is imported

Considering this script:
import pandas as pd
import os

df = pd.DataFrame({"a": [1, 2, 3], "b": [3, 4, 5]})

def a():
    b()

def b():
    print(df.head(1))

a()
When we run it, the output in the console is the message we would expect from print(df.head(1)). I got this output in VS Code, but running it in a terminal gives the same result.
So on that script, even if df was not locally defined inside of function b(), it went out to the global environment and fetched df. That's what I expected.
But if I load functions a() and b() from a different file, then b() can't find df in the global environment. Here's an example:
Consider two .py files, the first one is this one C:/Users/someone/some_repo/test_scope_functions.py:
def a():
    b()

def b():
    print(df.head(1))
Then we have this second one, C:/Users/someone/some_repo/test.py:
import pandas as pd
import os
os.chdir('C:/Users/someone/some_repo')
df=pd.DataFrame({"a":[1,2,3],"b":[3,4,5]})
from test_scope_functions import a
a()
So when I go to the terminal and run python3 C:/Users/someone/some_repo/test.py, I get an error that df is not defined, and the error comes from inside function b(). I have also tried this in VS Code, with the same result.
So my guess is that when functions are imported, they cannot link the variables they use to the global environment of the module that imports them.
I need the import approach to work. That means my ask is:
Make imported functions that use variables not defined within them fetch those values from the global environment's variables (supposing they exist in the global environment), where by "global environment" I mean the frame in which we imported those functions using from <place> import <function>.
In other words: if I have a script that defines a variable dummy with the value 1, and on that same script I run from library import function_test, where function_test() doesn't define dummy but uses it in its definition, then I need function_test() to use dummy as 1, since that's the value it has in the script that imported the function.
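There is no way to make imported functions resolve names in the importer's namespace: a function always looks up globals in the module where it was defined. One workaround (a sketch using a stand-in module built in memory, not the asker's actual files) is to assign the value onto the imported module object before calling:

```python
import types

# Stand-in for test_scope_functions.py: b() reads a module-level name `df`.
funcs = types.ModuleType("test_scope_functions")
exec(
    "def a():\n"
    "    return b()\n"
    "def b():\n"
    "    return df\n",
    funcs.__dict__,
)

df = "the caller's df"     # defined in the importing script
# Calling funcs.a() here would raise NameError: `df` is not in funcs' globals.
funcs.df = df              # inject it into the defining module's namespace
print(funcs.a())           # the caller's df
```

With real files this is simply `import test_scope_functions` followed by `test_scope_functions.df = df` before calling `a()`.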

Creating SequenceTaggingDataset from list, not file

I would like to create a SequenceTaggingDataset from two lists that I have created dynamically inside my code - train_sentences and train_tags. I would want to write something like this:
train_data = SequenceTaggingDataset(examples=(zip(train_sentences, train_tags)))
However, the constructor must receive a path. And not only that: it looks from the code as though, even if I were to provide the examples, it would override them and initialize examples to an empty list.
For various reasons, I do not want to save the lists I created in a file from which the SequenceTaggingDataset could read. Is there any way around this, save defining my own custom class?
You will need to modify the source code for it (https://pytorch.org/text/_modules/torchtext/datasets/sequence_tagging.html#SequenceTaggingDataset). You can make a local copy and import it as your own module.
path is used in __init__. The important part is that it reads lines from the file and splits each one on the given separator into a list named columns. This columns list is then fed, together with fields, into another class method to construct the examples list. Please read the provided example here to understand fields (note that UDPOS is called there to create a SequenceTaggingDataset).
What you need is columns, which you don't have to read from a file, as you already have all the components. You can feed it directly by simplifying the class's __init__:
def __init__(self, columns, fields, encoding="utf-8", separator="\t", **kwargs):
    examples = []
    examples.append(data.Example.fromlist(columns, fields))
    super(SequenceTaggingDataset, self).__init__(examples, fields, **kwargs)
columns is a nested list of lists: [[word], [UD_TAG], [PTB_TAG]]. That means you need to feed the following into the modified class:
train = SequenceTaggingDataset([train_sentences, train_tags], fields=...)

External function with global variables

I want to write functions in external files, as it is more convenient for editing, and use global variables.
Apparently the only way to do that is to use from some_file import some_function (right?). Is it possible to still use global variables this way? That is, can variables declared in the main file be directly accessible in the external file? I would also like to avoid passing them as arguments, as that complicates the code. I was thinking of some "include" instruction, but I'm not sure one exists in Python.
So the code in the main file would be this:
from test import test
x=1
test()
and in the file test.py it would be this:
def test():
    global x
    print(x)
Maybe this is just a problem of having the right editor... Has anyone a recommendation for macOS?
Python's import is pretty much equivalent to include in other languages, especially in the form from some_file import *, which imports the whole namespace: functions, classes, and all global variables in that module or package.
Edit: However, if you want to do what you requested in your comments, that can still be done with imported variables. As an example, let's consider two files, main.py and imported.py.
imported.py might look like this:
some_global_var = 1
other_var = 2
def add():
    return some_global_var + other_var
Because imported.py has functions that use global variables (instead of arguments), there's no reason you can't change those variables once imported. To do that, main.py can look like this:
import imported
print(imported.add()) # 3 - because we didn't change anything yet
imported.some_global_var = 10
imported.other_var = 20
print(imported.add()) # 30 - because we redefined the imported variables that our imported function uses

How can I load Python lambda expressions from YAML files using ruamel.yaml?

I'm trying to serialize and deserialize objects that contain lambda expressions using ruamel.yaml. As shown in the example, this yields a ConstructorError. How can this be done?
import sys
import ruamel.yaml
yaml = ruamel.yaml.YAML(typ='unsafe')
yaml.allow_unicode = True
yaml.default_flow_style = False
foo = lambda x: x * 2
yaml.dump({'foo': foo}, sys.stdout)
# foo: !!python/name:__main__.%3Clambda%3E
yaml.load('foo: !!python/name:__main__.%3Clambda%3E')
# ConstructorError: while constructing a Python object
# cannot find '<lambda>' in the module '__main__'
# in "<unicode string>", line 1, column 6
That is not going to work. ruamel.yaml dumps functions (or methods) by referring to their names in the source code (i.e. it doesn't try to store the actual code).
Your lambda is an anonymous function, so there is no name that can properly be retrieved. In the same way, Python's pickle doesn't support lambdas.
I am not sure if it should be an error to try and dump lambda, or that a warning should be in place.
The simple solution is to make your lambda(s) into named functions. Alternatively, you might be able to get at the actual code or AST for the lambda and store and retrieve that, but that is more work and might not be portable, depending on what you store.
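The pickle comparison can be seen directly with the standard library (a sketch of the same naming limitation, not ruamel.yaml itself): serialization by name fails for an anonymous lambda but works for a def'd function.

```python
import pickle

double = lambda x: x * 2   # __qualname__ is '<lambda>': no importable name

def double_named(x):       # a def'd function *can* be found by its name
    return x * 2

err = None
try:
    pickle.dumps(double)   # fails: lookup of '<lambda>' by name fails
except Exception as exc:
    err = exc
print(type(err).__name__)  # PicklingError in CPython 3

restored = pickle.loads(pickle.dumps(double_named))  # round-trips by name
print(restored(21))        # 42
```

Renaming the lambda variable doesn't help; only rewriting it as a def gives it a retrievable name.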

How do I implement a global "oracle" in python? [duplicate]

I've run into a bit of a wall importing modules in a Python script. I'll do my best to describe the error, why I run into it, and why I'm trying this particular approach to solve my problem (which I will describe in a second):
Let's suppose I have a module in which I've defined some utility functions/classes, which refer to entities defined in the namespace into which this auxiliary module will be imported (let "a" be such an entity):
module1:
def f():
    print a
And then I have the main program, where "a" is defined, into which I want to import those utilities:
import module1
a=3
module1.f()
Executing the program will trigger the following error:
Traceback (most recent call last):
  File "Z:\Python\main.py", line 10, in <module>
    module1.f()
  File "Z:\Python\module1.py", line 3, in f
    print a
NameError: global name 'a' is not defined
Similar questions have been asked in the past (two days ago, d'uh) and several solutions have been suggested, however I don't really think these fit my requirements. Here's my particular context:
I'm trying to make a Python program which connects to a MySQL database server and displays/modifies data with a GUI. For cleanliness' sake, I've defined a bunch of auxiliary/utility MySQL-related functions in a separate file. However, they all share a common variable, which I had originally defined inside the utilities module: the cursor object from the MySQLdb module.
I later realised that the cursor object (which is used to communicate with the db server) should be defined in the main module, so that both the main module and anything that is imported into it can access that object.
End result would be something like this:
utilities_module.py:
def utility_1(args):
    # code which references a variable named "cur"

def utility_n(args):
    # etcetera
And my main module:
program.py:
import MySQLdb, Tkinter
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
And then, as soon as I try to call any of the utilities functions, it triggers the aforementioned "global name not defined" error.
A particular suggestion was to have a "from program import cur" statement in the utilities file, such as this:
utilities_module.py:
from program import cur
#rest of function definitions
program.py:
import Tkinter, MySQLdb
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
But that's a cyclic import or something like that and, bottom line, it crashes too. So my question is:
How in hell can I make the "cur" object, defined in the main module, visible to those auxiliary functions which are imported into it?
Thanks for your time and my deepest apologies if the solution has been posted elsewhere. I just can't find the answer myself and I've got no more tricks in my book.
Globals in Python are global to a module, not across all modules. (Many people are confused by this, because in, say, C, a global is the same across all implementation files unless you explicitly make it static.)
There are different ways to solve this, depending on your actual use case.
Before even going down this path, ask yourself whether this really needs to be global. Maybe you really want a class, with f as an instance method, rather than just a free function? Then you could do something like this:
import module1
thingy1 = module1.Thingy(a=3)
thingy1.f()
If you really do want a global, but it's just there to be used by module1, set it in that module.
import module1
module1.a=3
module1.f()
On the other hand, if a is shared by a whole lot of modules, put it somewhere else, and have everyone import it:
import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
… and, in module1.py:
import shared_stuff

def f():
    print shared_stuff.a
Don't use a from import unless the variable is intended to be a constant. from shared_stuff import a would create a new a variable initialized to whatever shared_stuff.a referred to at the time of the import, and this new a variable would not be affected by assignments to shared_stuff.a.
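A minimal sketch of that pitfall, using an in-memory module so it runs standalone (names mirror the answer's example):

```python
import types

shared_stuff = types.ModuleType("shared_stuff")
shared_stuff.a = 1

# `from shared_stuff import a` is equivalent to binding a local name once:
a = shared_stuff.a

shared_stuff.a = 3     # a later assignment rebinds the module attribute...
print(a)               # 1 -- the from-imported name still holds the old value
print(shared_stuff.a)  # 3 -- attribute access sees the update
```

This is why the answer recommends `import shared_stuff` plus `shared_stuff.a` everywhere, rather than a from import.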
Or, in the rare case that you really do need it to be truly global everywhere, like a builtin, add it to the builtin module. The exact details differ between Python 2.x and 3.x. In 3.x, it works like this:
import builtins
import module1
builtins.a = 3
module1.f()
As a workaround, you could consider setting environment variables in the outer layer, like this.
main.py:
import os
os.environ['MYVAL'] = str(myintvariable)
mymodule.py:
import os

myval = None
if 'MYVAL' in os.environ:
    myval = os.environ['MYVAL']
As an extra precaution, handle the case when MYVAL is not defined inside the module.
This post is just an observation about Python behaviour I encountered. Maybe the advice above doesn't work for you if you did the same thing I did below.
Namely, I have a module which contains global/shared variables (as suggested above):
#sharedstuff.py
globaltimes_randomnode=[]
globalist_randomnode=[]
Then I had the main module which imports the shared stuff with:
import sharedstuff as shared
and some other modules that actually populated these arrays. These are called by the main module. When exiting these other modules I can clearly see that the arrays are populated. But when reading them back in the main module, they were empty. This was rather strange for me (well, I am new to Python). However, when I change the way I import the sharedstuff.py in the main module to:
from globals import *
it worked (the arrays were populated).
Just sayin'
A function uses the globals of the module it's defined in. Instead of setting a = 3, for example, you should be setting module1.a = 3. So, if you want cur available as a global in utilities_module, set utilities_module.cur.
A better solution: don't use globals. Pass the variables you need into the functions that need it, or create a class to bundle all the data together, and pass it when initializing the instance.
The easiest solution to this particular problem would have been to add another function within the module that would have stored the cursor in a variable global to the module. Then all the other functions could use it as well.
module1:
cursor = None

def setCursor(cur):
    global cursor
    cursor = cur

def method(some, args):
    global cursor
    do_stuff(cursor, some, args)
main program:
import module1
cursor = get_a_cursor()
module1.setCursor(cursor)
module1.method("some", "args")
Since globals are module-specific, you can add the following function to all imported modules, and then use it to:
- add singular variables (in dictionary format) as globals for those modules
- transfer your main module's globals to them
addglobals = lambda x: globals().update(x)
Then all you need to do to pass on the current globals is:
import module
module.addglobals(globals())
Since I haven't seen it in the answers above, I thought I would add my simple workaround: just add a global_dict argument to the function that needs the calling module's globals, and pass the dict in when calling, e.g.:
# external_module.py
def imported_function(global_dict=None):
    print(global_dict["a"])

# calling_module.py
a = 12
from external_module import imported_function
imported_function(global_dict=globals())
# prints 12
The OOP way of doing this would be to make your module a class instead of a set of unbound methods. Then you could use __init__ or a setter method to set the variables from the caller for use in the module methods.
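A sketch of that class-based approach (the names and the fake cursor are illustrative, not from the original modules):

```python
class DbUtilities:
    """Bundles the shared cursor with the functions that use it."""

    def __init__(self, cursor):
        self.cursor = cursor               # set once by the caller

    def utility_1(self, query):
        return self.cursor.execute(query)  # every method sees self.cursor


class FakeCursor:
    """Stands in for MySQLdb's cursor in this self-contained sketch."""

    def execute(self, query):
        return "ran " + query


utils = DbUtilities(FakeCursor())
print(utils.utility_1("SELECT 1"))  # ran SELECT 1
```

With the real database, you would pass `db.cursor()` to `DbUtilities` instead of the fake.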
Update
To test the theory, I created a module and put it on pypi. It all worked perfectly.
pip install superglobals
Short answer
This works fine in Python 2 or 3:
import inspect

def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals
save as superglobals.py and employ in another module thusly:
from superglobals import *
superglobals()['var'] = value
Extended Answer
You can add some extra functions to make things more attractive.
def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals

def getglobal(key, default=None):
    """
    getglobal(key[, default]) -> value

    Return the value for key if key is in the global dictionary, else default.
    """
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals.get(key, default)

def setglobal(key, value):
    _globals = superglobals()
    _globals[key] = value

def defaultglobal(key, value):
    """
    defaultglobal(key, value)

    Set the value of global variable `key` if it is not otherwise set.
    """
    _globals = superglobals()
    if key not in _globals:
        _globals[key] = value
Then use thusly:
from superglobals import *
setglobal('test', 123)
defaultglobal('test', 456)
assert(getglobal('test') == 123)
Justification
The "python purity league" answers that litter this question are perfectly correct, but in some environments (such as IDAPython), which are basically single-threaded with a large globally instantiated API, it just doesn't matter as much.
It's still bad form and a bad practice to encourage, but sometimes it's just easier. Especially when the code you are writing isn't going to have a very long life.
