Is it possible to change the directory where Origen checks for sub-flows? - origen-sdk

The Origen test program generation docs mention importing sub-flows. Is there a way to override where Origen searches for the sub-flow files? In the examples, only the name of the sub-flow is passed to the import method, like so:
Flow.create(environment: :probe) do
  import "vreg"
end

Yes, this can be done in two ways.
1) Place the relative path in the flow file. The downside of this method is its repetitive nature: the path has to be repeated in every flow file that imports a sub-flow.
# 1)
Flow.create(environment: :probe) do
  import '../sub_flows/vreg'
end
2) Override the import method in your test interface. This solves the issue for everyone at once and cleans up the flow files.
# 2)

# Flow file
Flow.create(environment: :probe) do
  import 'vreg'
end

# Test interface file
def import(sub_flow, options = {})
  sub_flow = "../sub_flows/#{sub_flow}"
  super(sub_flow, options)
end

Related

How to dynamically embed Python code snippets into a source file in Python 3?

For example, I have a source file like this:
from math import *
# dynamically import `calculate` function
# from other files here
calculate = my_dynamic_import("xxx.py")
print(calculate(1))
And two code snippets, which are read-only:
# file my_sqrt.py
def calculate(num):
    return sqrt(num)

# file my_sin.py
def calculate(num):
    return sin(num)
How can I implement the my_dynamic_import function so that I can load and switch the calculation function between different code snippets dynamically and programmatically?
If you want to import functions from another Python file based on a string, then you could use something like exec(open('xxx.py').read()). Make sure the file is safe, though, because this executes all code inside it, which could be malicious if left unchecked.
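A minimal sketch of what my_dynamic_import could look like on top of exec(), assuming the snippet files are trusted and each defines a calculate function:

from math import sqrt, sin  # names the snippet files rely on

def my_dynamic_import(path):
    # Execute the snippet in a private namespace that already contains
    # the math names it expects, then hand back its `calculate`.
    namespace = {"sqrt": sqrt, "sin": sin}
    with open(path) as f:
        exec(f.read(), namespace)
    return namespace["calculate"]

calculate = my_dynamic_import("my_sqrt.py")
print(calculate(1))  # 1.0

calculate = my_dynamic_import("my_sin.py")  # switch implementations
print(calculate(1))  # ~0.8415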

How to persist variables across multiple module instances? [duplicate]

I'm a bit confused about how global variables work. I have a large project, with around 50 files, and I need to define global variables for all those files.
What I did was define them in my projects main.py file, as following:
# ../myproject/main.py
# Define global myList
global myList
myList = []
# Imports
import subfile
# Do something
subfile.stuff()
print(myList[0])
I'm trying to use myList in subfile.py, as follows:
# ../myproject/subfile.py
# Save "hey" into myList
def stuff():
    globals()["myList"].append("hey")
Another way I tried, which didn't work either:
# ../myproject/main.py
# Import globfile
import globfile
# Save myList into globfile
globfile.myList = []
# Import subfile
import subfile
# Do something
subfile.stuff()
print(globfile.myList[0])
And inside subfile.py I had this:
# ../myproject/subfile.py
# Import globfile
import globfile

# Save "hey" into myList
def stuff():
    globfile.myList.append("hey")
But again, it didn't work. How should I implement this? I understand that it can't work like that, when the two files don't really know each other (well, subfile doesn't know main), but I can't think of how to do it without resorting to file IO or pickle, which I don't want to do.
The problem is that you defined myList in main.py, but subfile.py needs to use it. Here is a clean way to solve this problem: move all globals to one file; I call this file settings.py. This file is responsible for defining the globals and initializing them:
# settings.py
def init():
    global myList
    myList = []
Next, your subfile can import globals:
# subfile.py
import settings

def stuff():
    settings.myList.append('hey')
Note that subfile does not call init() — that task belongs to main.py:
# main.py
import settings
import subfile

settings.init()           # Call only once
subfile.stuff()           # Do stuff with the global var
print(settings.myList[0]) # Check the result
This way, you achieve your objective while avoiding initializing the global variables more than once.
See Python's document on sharing global variables across modules:
The canonical way to share information across modules within a single program is to create a special module (often called config or cfg).
config.py:
x = 0 # Default value of the 'x' configuration setting
Import the config module in all modules of your application; the module then becomes available as a global name.
main.py:
import config
print(config.x)
In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
You can think of Python global variables as "module" variables - and as such they are much more useful than the traditional "global variables" from C.
A global variable is actually defined in a module's __dict__ and can be accessed from outside that module as a module attribute.
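A quick illustration of that point using the standard math module:

import math

print(math.pi)              # normal attribute access
print(math.__dict__['pi'])  # the same object, straight out of the module's __dict__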
So, in your example:
# ../myproject/main.py
# Define global myList
# (no "global" declaration is needed at module level; the global
# statement only matters inside functions and methods)
myList = []
# Imports
import subfile
# Do something
subfile.stuff()
print(myList[0])
And:
# ../myproject/subfile.py
# Save "hey" into myList
def stuff():
    # You have to make the module main available to the code here.
    # Placing the import inside the function body will usually avoid
    # import cycles - unless you happen to call this function from
    # either main's or subfile's body (i.e. not from inside a function
    # or method).
    import main
    main.myList.append("hey")
Using from your_file import * should fix your problems. It makes everything in your_file globally available (with the exception of local variables inside functions, of course).
For example:

# test.py
from pytest import *
print(hello_world)

and:

# pytest.py
hello_world = "hello world!"
Hai Vu's answer works great; just one comment. If you are using the global in another module and you want to set it dynamically, pay attention to import the other modules after you set the global variables. For example:
# settings.py
def init(arg):
    global myList
    myList = []
    myList.append(arg)
# subfile.py
import settings

def print_first():
    print(settings.myList[0])

# main.py
import settings
settings.init("1st")  # init the global before it is used in other
                      # imported modules, or else it will be undefined
import subfile
subfile.print_first() # global usage
Your 2nd attempt will work perfectly, and is actually a really good way to handle variable names that you want to have available globally. But you have a name error in the last line. Here is how it should be:
# ../myproject/main.py
# Import globfile
import globfile
# Save myList into globfile
globfile.myList = []
# Import subfile
import subfile
# Do something
subfile.stuff()
print(globfile.myList[0])
See the last line? myList is an attr of globfile, not subfile. This will work as you want.
I just came across this post and thought I'd share my solution, in case anyone is in the same situation as me: there are quite a few files in the program, and you don't have time to think through the whole import sequence of your modules (if you didn't plan that properly right from the start, as I didn't).
In such cases, in the script where you initialize your global(s), simply code a class like:
class My_Globals:
    def __init__(self):
        self.global1 = "initial_value_1"
        self.global2 = "initial_value_2"
        ...
and then, in the script where you initialized your globals, replace

global1 = "initial_value_1"

with

globals = My_Globals()
I was then able to retrieve and change the values of any of these globals via

globals.desired_global

in any script, and those changes were automatically applied in all the other scripts using them. Everything now worked, using the exact same import statements that had previously failed due to the problems mentioned in this post. Since the object's attributes can be changed dynamically, there is no need to consider or change any import logic, and compared with plain importing of global variables this was definitely the quickest and easiest (for later access) way to solve this kind of problem for me.
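For concreteness, a minimal sketch of this pattern (the file and attribute names are illustrative); the trick is that every importer shares the one instance, so attribute mutations are visible everywhere, even through a from import, because no name is ever rebound:

# globals_store.py (hypothetical)
class My_Globals:
    def __init__(self):
        self.global1 = "initial_value_1"

my_globals = My_Globals()      # the single shared instance

# any other script:
from globals_store import my_globals

my_globals.global1 = "changed" # seen by every module importing the instance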
Based on the above answers and the links within them, I created a new module called global_variables.py:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ==============================================================================
#
#   global_variables.py - Global variables shared by all modules.
#
# ==============================================================================

USER = None  # User ID, Name, GUID; varies by platform

def init():
    """This should only be called once, by the main module.
    Child modules will inherit values. For example, if they contain

        import global_variables as g

    they can later reference 'g.USER' to get the user ID.
    """
    global USER
    import getpass
    USER = getpass.getuser()

# End of global_variables.py
Then in my main module I use this:
import global_variables as g
g.init()
In another child imported module I can use:
import global_variables as g
# hundreds of lines later....
print(g.USER)
I've only spent a few minutes testing this in two different multi-module Python programs, but so far it's working perfectly.
Namespace nightmares arise when you do from config import mySharedThing. That can't be stressed enough.
It's OK to use from in other places.
You can even have a config module that's totally empty.
# my_config.py
pass

# my_other_module.py
import my_config

def doSomething():
    print(my_config.mySharedThing.message)
# main.py
from dataclasses import dataclass
from my_other_module import doSomething
import my_config

@dataclass
class Thing:
    message: str

my_config.mySharedThing = Thing('Hey everybody!')
doSomething()
result:
$ python3 main.py
Hey everybody!
But using objects you pulled in with from will take you down a path of frustration.
# my_other_module.py
from my_config import mySharedThing

def doSomething():
    print(mySharedThing.message)
result:
$ python3 main.py
ImportError: cannot import name 'mySharedThing' from 'my_config' (my_config.py)
And maybe you'll try to fix it like this:
# my_config.py
mySharedThing = None
result:
$ python3 main.py
AttributeError: 'NoneType' object has no attribute 'message'
And then maybe you'll find this page and try to solve it by adding an init() method.
But the whole problem is the from.

How do I implement a global "oracle" in python? [duplicate]

I've run into a bit of a wall importing modules in a Python script. I'll do my best to describe the error, why I run into it, and why I'm trying this particular approach to solve my problem (which I will describe in a second):
Let's suppose I have a module in which I've defined some utility functions/classes, which refer to entities defined in the namespace into which this auxiliary module will be imported (let "a" be such an entity):
module1:

def f():
    print a
And then I have the main program, where "a" is defined, into which I want to import those utilities:
import module1
a=3
module1.f()
Executing the program will trigger the following error:
Traceback (most recent call last):
  File "Z:\Python\main.py", line 10, in <module>
    module1.f()
  File "Z:\Python\module1.py", line 3, in f
    print a
NameError: global name 'a' is not defined
Similar questions have been asked in the past (two days ago, d'uh) and several solutions have been suggested; however, I don't really think they fit my requirements. Here's my particular context:
I'm trying to make a Python program which connects to a MySQL database server and displays/modifies data with a GUI. For cleanliness' sake, I've defined a bunch of auxiliary/utility MySQL-related functions in a separate file. However, they all share a common variable, which I had originally defined inside the utilities module: the cursor object from the MySQLdb module.
I later realised that the cursor object (which is used to communicate with the db server) should be defined in the main module, so that both the main module and anything that is imported into it can access that object.
End result would be something like this:
utilities_module.py:

def utility_1(args):
    # code which references a variable named "cur"

def utility_n(args):
    # etcetera
And my main module:
program.py:
import MySQLdb, Tkinter
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
And then, as soon as I try to call any of the utilities functions, it triggers the aforementioned "global name not defined" error.
A particular suggestion was to have a "from program import cur" statement in the utilities file, such as this:
utilities_module.py:
from program import cur
#rest of function definitions
program.py:
import Tkinter, MySQLdb
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
But that's cyclic import or something like that and, bottom line, it crashes too. So my question is:
How in hell can I make the "cur" object, defined in the main module, visible to those auxiliary functions which are imported into it?
Thanks for your time and my deepest apologies if the solution has been posted elsewhere. I just can't find the answer myself and I've got no more tricks in my book.
Globals in Python are global to a module, not across all modules. (Many people are confused by this, because in, say, C, a global is the same across all implementation files unless you explicitly make it static.)
There are different ways to solve this, depending on your actual use case.
Before even going down this path, ask yourself whether this really needs to be global. Maybe you really want a class, with f as an instance method, rather than just a free function? Then you could do something like this:
import module1
thingy1 = module1.Thingy(a=3)
thingy1.f()
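module1 itself might then look something like this (a sketch; the Thingy name comes from the example call above):

# module1.py
class Thingy(object):
    def __init__(self, a):
        self.a = a        # state lives on the instance, not in a global

    def f(self):
        print(self.a)     # reads the instance attribute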
If you really do want a global, but it's just there to be used by module1, set it in that module.
import module1
module1.a=3
module1.f()
On the other hand, if a is shared by a whole lot of modules, put it somewhere else, and have everyone import it:
import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
… and, in module1.py:
import shared_stuff

def f():
    print shared_stuff.a
Don't use a from import unless the variable is intended to be a constant. from shared_stuff import a would create a new a variable initialized to whatever shared_stuff.a referred to at the time of the import, and this new a variable would not be affected by assignments to shared_stuff.a.
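A small demonstration of that pitfall, assuming a shared_stuff.py that starts with a = 0:

import shared_stuff
from shared_stuff import a  # binds a local name to the *current* object

shared_stuff.a = 3
print(a)               # still 0 - the local name was never rebound
print(shared_stuff.a)  # 3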
Or, in the rare case that you really do need it to be truly global everywhere, like a builtin, add it to the builtin module. The exact details differ between Python 2.x and 3.x. In 3.x, it works like this:
import builtins
import module1
builtins.a = 3
module1.f()
As a workaround, you could consider setting environment variables in the outer layer, like this:
main.py:
import os
os.environ['MYVAL'] = str(myintvariable)
mymodule.py:
import os

myval = None
if 'MYVAL' in os.environ:
    myval = os.environ['MYVAL']
As an extra precaution, handle the case when MYVAL is not defined inside the module.
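A compact version of that precaution; note that environment values always come back as strings, so convert explicitly if you stored a number:

import os

# .get() covers the case where MYVAL was never set; int() undoes the
# str() conversion done on the main.py side.
myval = int(os.environ.get('MYVAL', '0'))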
This post is just an observation about Python behaviour I encountered. Maybe the advice you read above doesn't work for you if you did the same thing I did below.
Namely, I have a module which contains global/shared variables (as suggested above):
# sharedstuff.py
globaltimes_randomnode = []
globalist_randomnode = []
Then I had the main module which imports the shared stuff with:
import sharedstuff as shared
and some other modules that actually populate these arrays. These are called by the main module. When exiting those other modules I can clearly see that the arrays are populated, but when reading them back in the main module, they were empty. This was rather strange to me (well, I am new to Python). However, when I changed the way I import sharedstuff.py in the main module to:
from sharedstuff import *
it worked (the arrays were populated).
Just sayin'
A function uses the globals of the module it's defined in. Instead of setting a = 3, for example, you should be setting module1.a = 3. So, if you want cur available as a global in utilities_module, set utilities_module.cur.
A better solution: don't use globals. Pass the variables you need into the functions that need it, or create a class to bundle all the data together, and pass it when initializing the instance.
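As a sketch of that class-based suggestion applied to the question's cursor (the DbUtils name is made up for illustration):

# utilities_module.py
class DbUtils(object):
    def __init__(self, cur):
        self.cur = cur                # the cursor is injected once

    def utility_1(self, args):
        self.cur.execute("SELECT 1")  # every method uses the shared cursor

# program.py would then do:
#   utils = DbUtils(cur)
#   utils.utility_1(...)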
The easiest solution to this particular problem would have been to add another function within the module that would have stored the cursor in a variable global to the module. Then all the other functions could use it as well.
module1:

cursor = None

def setCursor(cur):
    global cursor
    cursor = cur

def method(some, args):
    do_stuff(cursor, some, args)
main program:

import module1

cursor = get_a_cursor()
module1.setCursor(cursor)
module1.method()
Since globals are module-specific, you can add the following function to all imported modules, and then use it to:
- add singular variables (in dictionary format) as globals for those modules, or
- transfer your main module's globals to them.
addglobals = lambda x: globals().update(x)
Then all you need to pass on current globals is:
import module
module.addglobals(globals())
Since I haven't seen it in the answers above, I thought I would add my simple workaround: just add a global_dict argument to the function that requires the calling module's globals, and then pass the dict into the function when calling it; e.g.:
# external_module
def imported_function(global_dict=None):
    print(global_dict["a"])

# calling_module
a = 12
from external_module import imported_function
imported_function(global_dict=globals())
>>> 12
The OOP way of doing this would be to make your module a class instead of a set of unbound methods. Then you could use __init__ or a setter method to set the variables from the caller for use in the module methods.
Update
To test the theory, I created a module and put it on pypi. It all worked perfectly.
pip install superglobals
Short answer
This works fine in Python 2 or 3:
import inspect

def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals
save as superglobals.py and employ in another module thusly:
from superglobals import *
superglobals()['var'] = value
Extended Answer
You can add some extra functions to make things more attractive.
import inspect

def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals

def getglobal(key, default=None):
    """
    getglobal(key[, default]) -> value

    Return the value for key if key is in the global dictionary, else default.
    """
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals.get(key, default)

def setglobal(key, value):
    _globals = superglobals()
    _globals[key] = value

def defaultglobal(key, value):
    """
    defaultglobal(key, value)

    Set the value of global variable `key` if it is not otherwise set.
    """
    _globals = superglobals()
    if key not in _globals:
        _globals[key] = value
Then use thusly:
from superglobals import *

setglobal('test', 123)
defaultglobal('test', 456)
assert getglobal('test') == 123
Justification
The "python purity league" answers that litter this question are perfectly correct, but in some environments (such as IDAPython) which is basically single threaded with a large globally instantiated API, it just doesn't matter as much.
It's still bad form and a bad practice to encourage, but sometimes it's just easier. Especially when the code you are writing isn't going to have a very long life.

restart python (or reload modules) in py.test tests

I have a (python3) package that has completely different behaviour depending on how it's init()ed (perhaps not the best design, but rewriting is not an option). The module can only be init()ed once, a second time gives an error. I want to test this package (both behaviours) using py.test.
Note: the nature of the package makes the two behaviours mutually exclusive, there is no possible reason to ever want both in a singular program.
I have several test_xxx.py modules in my test directory. Each module will init the package in the way it needs (using fixtures). Since py.test starts the Python interpreter once, running all test modules in one py.test run fails.
Monkey-patching the package to allow a second init() is not something I want to do, since there is internal caching etc that might result in unexplained behaviour.
1. Is it possible to tell py.test to run each test module in a separate Python process (thereby not being influenced by inits in another test module)?
2. Is there a way to reliably reload a package (including all sub-dependencies, etc.)?
3. Is there another solution (I'm thinking of importing and then unimporting the package in a fixture, but this seems excessive)?
To reload a module, try using reload() from the importlib library.
Example:
from importlib import reload
import some_lib

# do something
reload(some_lib)
Also, launching each test in a new process is viable, but multiprocessed code is kind of painful to debug.
Example
import some_test
from multiprocessing import Manager, Process

# create a new return-value holder, in this case a list
manager = Manager()
return_value = manager.list()

# create the new process
process = Process(target=some_test.some_function, args=(arg, return_value))

# execute the process
process.start()

# wait for the process to finish
process.join()

# you can now use your return value as if it were a normal list,
# as long as it was assigned in your subprocess
Delete all your module imports, and also the test imports that themselves import your modules:

import sys

for key in list(sys.modules.keys()):
    if key.startswith("your_package_name") or key.startswith("test"):
        del sys.modules[key]
You can use this as a fixture by defining one in your conftest.py file with the @pytest.fixture decorator.
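For example, a sketch of what that conftest.py fixture could look like (the autouse flag and the "your_package_name" prefix are assumptions to adapt):

# conftest.py
import sys
import pytest

@pytest.fixture(autouse=True)
def fresh_imports():
    yield  # run the test first
    # afterwards, evict the package (and test modules) from the module
    # cache so the next test re-imports and re-inits from scratch
    for key in list(sys.modules.keys()):
        if key.startswith("your_package_name") or key.startswith("test"):
            del sys.modules[key]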
I once had a similar problem; quite a bad design, though:
import importlib
import sys

import pytest

@pytest.fixture()
def module_type1():
    mod = importlib.import_module('example')
    mod._init(10)
    yield mod
    del sys.modules['example']

@pytest.fixture()
def module_type2():
    mod = importlib.import_module('example')
    mod._init(20)
    yield mod
    del sys.modules['example']

def test1(module_type1):
    pass

def test2(module_type2):
    pass
The example/__init__.py had something like this:
def _init(val):
    if 'sample' in globals():
        logger.info(f'example already imported, val : {sample}')
    else:
        globals()['sample'] = val
        logger.info(f'importing example with val : {val}')
output:
importing example with val : 10
importing example with val : 20
No clue as to how complex your package is, but if it's just global variables, then this probably helps.
I had the same problem and found three solutions:

1. reload(some_lib)
2. Patch the SUT: since the imported method is a key and value in the SUT, you can patch the SUT. For example, if you use f2 of m2 in m1, you can patch m1.f2 instead of m2.f2.
3. Import the module and use module.function.
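A sketch of option 2 with the hypothetical m1/m2 modules named above; because m1 did from m2 import f2, the name that must be patched lives in m1:

from unittest.mock import patch

with patch('m1.f2') as fake_f2:
    fake_f2.return_value = 42
    # code under test that calls the f2 imported into m1 now sees the mock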

Writing a custom builder that executes an external command and a Python function

I'm looking to write a custom SCons Builder that:
Executes an external command to produce foo.temp
Then executes a Python function to manipulate foo.temp and produce the final output file
I've referred to the two following sections, but I'm not sure the correct way to "glue" them together.
18.1. Writing Builders That Execute External Commands
18.4. Builders That Execute Python Functions
I know that Command accepts a list of actions to take. But how do I properly handle that intermediate file? Ideally the intermediate file would be invisible to the user -- the entire Builder would appear to operate atomically.
Here's what I've come up with that seems to be working. However, the .bin file isn't being deleted automatically.
from SCons.Action import Action
from SCons.Builder import Builder
from SCons.Errors import StopError
from SCons.Script import Delete
from SCons.Util import is_List

_objcopy_builder = Builder(
    action = 'objcopy -O binary $SOURCE $TARGET',
    suffix = '.bin',
    single_source = 1
)

def _add_header(target, source, env):
    source = str(source[0])
    target = str(target[0])
    with open(source, 'rb') as src:
        with open(target, 'wb') as tgt:
            tgt.write(b'MODULE\x00\x00')
            tgt.write(src.read())
    return 0

_addheader_builder = Builder(
    action = _add_header,
    single_source = 1
)

def Elf2Mod(env, target, source, *args, **kw):
    def check_one(x, what):
        if not is_List(x):
            x = [x]
        if len(x) != 1:
            raise StopError('Only one {0} allowed'.format(what))
        return x

    target = check_one(target, 'target')
    source = check_one(source, 'source')
    # objcopy a binary file
    binfile = _objcopy_builder.__call__(env, source=source, **kw)
    # write the module header
    _addheader_builder.__call__(env, target=target, source=binfile, **kw)
    # delete the intermediate binary file
    # TODO: Not working
    Delete(binfile)
    return target

def generate(env):
    """Add Builders and construction variables to the Environment."""
    env.AddMethod(Elf2Mod, 'Elf2Mod')
    print('Added Elf2Mod to env {0}'.format(env))

def exists(env):
    return True
This can indeed be done with the Command builder by specifying a list of actions, as follows:

Command('foo.temp', 'foo.in',
        ['your_external_action',
         your_python_function])

Notice that foo.in is the source, and you should name it accordingly. But if foo.temp is internal as you mention, then this approach probably isn't the best one.
Another way, which I feel is much more flexible, would be to use a Custom Builder with a Generator and/or Emitter.
The Generator is a Python function where you do the actual work, which in your case would be calling the external command, and also call the Python function.
An Emitter allows you to have fine-tuned control over the sources and targets. I used a Builder with an Emitter (and Generator) once to do C++ and Java code generation from Thrift input IDL files. I had to read and process the Thrift input file to know exactly what files would be code-generated (which are the actual targets), and the Emitter is the best/only way to do something like this. If your particular use case isn't so complicated, you can skip the Emitter and just list your sources/targets in the call to the builder. But if you want foo.temp to be transparent to the end user, then you'll need an Emitter.
When using a Custom Builder with a Generator and Emitter, the Emitter will be called every time by SCons to calculate the sources and dependencies to know if the Generator needs to be called. The Generator will only be called if one of the targets is considered older with respect to the sources.
There are numerous examples showing how to use a Generator and Emitter in a Custom Builder, so I won't list the code here, but let me know if you need help with the syntax, etc.
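For orientation, here is a minimal sketch (an assumption-laden outline, not a tested implementation) of how the pieces could fit together with a header-writing step like the question's _add_header; the emitter is what keeps the intermediate .bin out of the user's way by registering it for cleanup:

from SCons.Builder import Builder

def _add_header(target, source, env):
    # Python post-processing step: read the intermediate .bin produced
    # by the objcopy action and write the final target with a header.
    bin_file = str(target[0]) + '.bin'
    with open(bin_file, 'rb') as src, open(str(target[0]), 'wb') as tgt:
        tgt.write(b'MODULE\x00\x00')
        tgt.write(src.read())

def _elf2mod_emitter(target, source, env):
    # Registering the hidden intermediate with Clean() means
    # 'scons -c' removes it even though the user never names it.
    env.Clean(target, str(target[0]) + '.bin')
    return target, source

elf2mod = Builder(
    # Two chained actions: the external command, then the Python step.
    action=['objcopy -O binary $SOURCE ${TARGET}.bin', _add_header],
    emitter=_elf2mod_emitter,
    single_source=1,
)

Attaching it with env['BUILDERS']['Elf2Mod'] = elf2mod then makes the whole thing look like a single atomic builder to the user.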
