Nim varargs in template? - nim-lang

How do you use varargs in a template? (I know it's possible with a macro, but I am wondering whether it's also possible with a template.) The example below doesn't compile:

template require*(modules: varargs[untyped]) =
  for m in modules:
    from m import nil

require options, strutils
But this works:

template require*(a) =
  from a import nil

template require*(a, b) =
  from a import nil
  from b import nil

require options, strutils

varargs DOES work in templates*, but your use case requires a macro: a template is just code substitution, and the following is not valid Nim code:

for m in [options, strutils]:
  from m import nil

*Example of varargs usage in a template:

template require(modules: varargs[untyped]) =
  import modules

require options, strutils


Retrieving the source code dependencies of a python 3 function

Using the AST in Python 3, how do you build a dictionary or list of the code dependencies of a given function?
Consider the following code, where my_clever_function has the desired behaviour:
# myfile2.py
import numpy as np

a = 1
a += 1

def my_other_function():
    def f():
        return a
    return np.random.randint(10) + f()

# myfile1.py
import numpy as np
from .myfile2 import my_other_function

def external(a, b):
    return np.sqrt(a * b) + my_other_function()

class A:
    def afunc(self, a, b):
        v = external(a, b)
        return v

>>> my_clever_function(A.afunc)
[myfile1.A.afunc, myfile1.external, myfile2.my_other_function, myfile2.a]
with the following structure:

project/
    myfile1.py
    myfile2.py
I want to retrieve the dependencies of the method afunc as a list.
I'm assuming that there is no funny business about functions altering global variables.
- external is a dependency because it is not defined inside A.afunc
- np.sqrt is not a "dependency" (in this sense anyway) because it is not defined in my project
- likewise for np.random.randint
- my_other_function is a dependency because it is not defined inside A.afunc
- f is not a dependency because it is defined inside my_other_function
- f needs the global variable a
My motivation is to see whether there have been any code changes between two project versions (in git, perhaps).
We could find the dependencies of a function as above and store their source.
In the future, we find the dependencies again and see whether the source code has changed.
We only compare the parts that are required (barring any funny global variables messing with the functions).
It is possible to walk the AST with Python's builtin module ast. So my_clever_function could look like this:

import ast
from pprint import pprint

import dill

class Analyzer(ast.NodeVisitor):
    def __init__(self):
        self.stats = {...}
    ...
    def report(self):
        pprint(self.stats)

def my_clever_function(f):
    source = dill.source.getsource(f)
    tree = ast.parse(source)
    analyzer = Analyzer()
    analyzer.visit(tree)

But how do you walk from a given function outwards to its dependencies? I can see how to list symbols (https://www.mattlayman.com/blog/2018/decipher-python-ast/), but how do you list only those that the start node depends on?
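A minimal sketch of a starting point, under loud assumptions: the helper free_names and its exact rules are made up for illustration, it only sees one function's source, and it does not follow attribute chains (so it reports `np`, not `np.sqrt`), resolve imports, or recurse into the dependencies it finds. It collects every name the function loads but never binds, which is the set of candidate external dependencies:

```python
import ast

def free_names(func_source):
    """Names a function loads but never binds: its candidate dependencies.

    A rough first pass only: no attribute resolution, no import
    resolution, no recursion into the dependencies found.
    """
    tree = ast.parse(func_source)
    fn = tree.body[0]  # assume the source is a single `def`
    bound = {a.arg for a in fn.args.args}
    loaded = set()
    for node in ast.walk(fn):
        if isinstance(node, ast.Name):
            (loaded if isinstance(node.ctx, ast.Load) else bound).add(node.id)
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # ast.walk yields fn itself, so the function's own name
            # (and any inner def's name) counts as bound, not loaded
            bound.add(node.name)
    return loaded - bound

src = """
def external(a, b):
    return np.sqrt(a * b) + my_other_function()
"""
print(sorted(free_names(src)))  # ['my_other_function', 'np']
```

From here, resolving each free name to an object in your own project (and recursing into it) would reproduce the outward walk the question asks for.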

How to get a Hydra config without using hydra.main()

Let's say we have the following setup (copied & shortened from the Hydra docs):
Configuration file: config.yaml

db:
  driver: mysql
  user: omry
  pass: secret

Python file: my_app.py

import hydra

@hydra.main(config_path="config.yaml")
def my_app(cfg):
    print(cfg.pretty())

if __name__ == "__main__":
    my_app()
This works well when we can use a decorator on the function my_app. Now I would like (for small scripts and testing purposes, but that is not important) to get this cfg object outside of any function, in a plain Python script. From my understanding of how decorators work, it should be possible to call

import hydra

cfg = hydra.main(config_path="config.yaml")(lambda x: x)()
print(cfg.pretty())

but then cfg is just None and not the desired configuration object. So it seems the decorator does not pass on the return value. Is there another way to get at that cfg?
Use the Compose API:
from hydra import compose, initialize
from omegaconf import OmegaConf
initialize(config_path="conf", job_name="test_app")
cfg = compose(config_name="config", overrides=["db=mysql", "db.user=me"])
print(OmegaConf.to_yaml(cfg))
This will only compose the config and will not have side effects like changing the working directory or configuring the Python logging system.
None of the above solutions worked for me. They gave errors:

'builtin_function_or_method' object has no attribute '__code__'

and

GlobalHydra is already initialized, call GlobalHydra.instance().clear() if you want to re-initialize
I dug further into hydra and realised I could just use OmegaConf to load the file directly. You don't get overrides but I'm not fussed about this.
import omegaconf
cfg = omegaconf.OmegaConf.load(path)
I found a rather ugly answer, but it works; if anyone finds a more elegant solution, please let us know!
We can use a closure or some mutable object. In this example we define a list outside and append the config object:
For hydra >= 1.0.0 you have to use config_name instead, see the documentation.

import hydra

c = []
hydra.main(config_name="config.yaml")(lambda x: c.append(x))()
cfg = c[0]
print(cfg)

For older versions:

import hydra

c = []
hydra.main(config_path="config.yaml")(c.append)()
cfg = c[0]
print(cfg.pretty())
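The reason the question's cfg = hydra.main(...)(lambda x: x)() attempt returned None is that the wrapper built by hydra.main runs the decorated function but discards its return value; the mutable-container trick sidesteps exactly that. A stdlib-only sketch, with a hypothetical runner decorator standing in for hydra.main (the real Hydra wrapper does far more than this):

```python
def runner(config):
    """Hypothetical stand-in for @hydra.main: builds a wrapper that runs
    the decorated function but discards its return value."""
    def decorator(func):
        def wrapper():
            func(config)  # the return value of func is thrown away here
        return wrapper
    return decorator

# Returning the config from the lambda therefore yields None:
cfg = runner({"db": "mysql"})(lambda c: c)()
print(cfg)  # None

# Smuggling the value out through a mutable container works:
box = []
runner({"db": "mysql"})(box.append)()
cfg = box[0]
print(cfg)  # {'db': 'mysql'}
```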
Another ugly answer, but the author said it may break in a future version:
from hydra._internal.hydra import Hydra
from hydra._internal.utils import (
    _strict_mode_strategy,
    create_automatic_config_search_path,
    split_config_path,
)

class SomeThing:
    ...
    def load_from_yaml(self, config_path, strict=True):
        config_dir, config_file = split_config_path(config_path)
        strict = _strict_mode_strategy(strict, config_file)
        search_path = create_automatic_config_search_path(
            config_file, None, config_dir
        )
        hydra = Hydra.create_main_hydra2(
            task_name='sdfs', config_search_path=search_path, strict=strict
        )
        config = hydra.compose_config(config_file, [])
        config.pop('hydra')
        self.config = config
        print(self.config.pretty())
This is my solution:

from omegaconf import OmegaConf

class MakeObj(object):
    """Convert a (nested) dictionary to an object.

    Thanks to https://stackoverflow.com/questions/1305532/convert-nested-python-dict-to-object
    """
    def __init__(self, d):
        for a, b in d.items():
            if isinstance(b, (list, tuple)):
                setattr(self, a, [MakeObj(x) if isinstance(x, dict) else x for x in b])
            else:
                setattr(self, a, MakeObj(b) if isinstance(b, dict) else b)

def read_yaml(path):
    x_dict = OmegaConf.load(path)
    x_yamlstr = OmegaConf.to_yaml(x_dict)
    x_obj = MakeObj(x_dict)
    return x_yamlstr, x_dict, x_obj

x_yamlstr, x_dict, x_obj = read_yaml('config/train.yaml')
print(x_yamlstr)
print(x_dict)
print(x_obj)
print(dir(x_obj))

Retaining a variable created during module import in python

I am trying to populate a dictionary with functions, keyed by the name of each function contained in another file, of the form:
{'fn_a': <function fn_a at 0x000002239BDCB510>, 'fn_b': <function fn_b at 0x000002239BDCB268>}
I'm currently attempting to do it with a decorator, so that when the file containing the functions (definitions.py) is imported, the dictionary is populated as follows. The problem is that the dictionary is cleared once the import is complete.
definitions.py:

from main import formatter

@formatter
def fn_a(arg):
    return arg

@formatter
def fn_b(arg):
    return arg

main.py:

available_functions = {}

def formatter(func):
    # work out the function name and write it to func_name
    func_name = str(func).split()[1]
    available_functions[func_name] = func
    return func

import definitions
How can I keep the dictionary populated with values after the module import is finished?
I was able to solve the problem by using FunctionType from the types module to collect the available functions from the imported module. It doesn't solve the problem within the conditions I specified above, but it does work.

from types import FunctionType

available_functions = {}

def formatter(func):
    # work out the function name and write it to func_name
    # global available_functions
    func_name = str(func).split()[1]
    available_functions[func_name] = func
    return func

import definitions

funcs = [getattr(definitions, a) for a in dir(definitions)
         if isinstance(getattr(definitions, a), FunctionType)]
for i in funcs:
    formatter(i)
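As a side note, two details of the original setup can be tightened: func.__name__ is a more robust way to get the function's name than parsing str(func), and keeping the registry in its own module (rather than in main.py) avoids the circular `from main import formatter` import. A sketch under those assumptions; the registry module name is made up, and fn_a/fn_b are inlined here purely so the sketch is self-contained, whereas in practice they would live in definitions.py and import the registry module:

```python
# registry.py (hypothetical module holding the registry; definitions.py
# would import this instead of main.py, avoiding the circular import)
available_functions = {}

def formatter(func):
    # func.__name__ is more robust than parsing str(func)
    available_functions[func.__name__] = func
    return func

# In practice these live in definitions.py and do
# `from registry import formatter`; inlined here for a runnable sketch.
@formatter
def fn_a(arg):
    return arg

@formatter
def fn_b(arg):
    return arg

print(sorted(available_functions))  # ['fn_a', 'fn_b']
```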

The unbearable opaqueness of time.struct_time

Why do pylint and the intellisense features of IDEs have trouble recognizing instances of time.struct_time? The following code contains some trivial tests of existent/non-existent attributes of classes, named tuples and the named-tuple-like time.struct_time. Everything works as expected in pylint, IntelliJ and VSCode - the access to missing attributes is reported in each case except for time.struct_time - it generates no warnings or errors in any of these tools. Why can't they tell what it is and what its attributes are?
import time
from collections import namedtuple

t = time.localtime()
e = t.tm_mday
e = t.bad  # this is not reported by linters or IDEs

class Clz:
    cvar = 'whee'
    def __init__(self):
        self.ivar = 'whaa'

o = Clz()
e = Clz.cvar
e = o.ivar
e = Clz.bad
e = o.bad

Ntup = namedtuple('Ntup', 'thing')
n = Ntup(thing=3)
e = n.thing
e = n.bad
The context of the question is the following recent bug in pipenv:

# Halloween easter-egg.
if ((now.tm_mon == 10) and (now.tm_day == 30)):

Obviously, this code path was never tested, but it seems the typical static-analysis tools would not have helped here either. This is odd for a type from the standard library.
(Fix can be seen in full at https://github.com/kennethreitz/pipenv/commit/033b969d094ba2d80f8ae217c8c604bc40160b03)
time.struct_time is an object defined in C, which means it can't be introspected statically. The autocompletion software can parse Python code and make a reasonable guess as to what attributes classes and namedtuples support, but it can't do this for C-defined objects.
The work-around most systems use is to generate stub files; usually by introspecting the object at runtime (importing the module and recording the attributes found). For example, CodeIntel (part of the Komodo IDE), uses an XML file format called CIX. However, this is a little more error-prone so such systems then err on the side of caution, and will not explicitly mark unknown attributes as wrong.
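The runtime-introspection approach such stub generators use can be demonstrated directly: import the module and record which attributes the C-defined type actually exposes. A small illustrative sketch, not any particular tool's implementation:

```python
import time

# A stub generator imports the module and records which attributes the
# C-defined type actually exposes at runtime.
attrs = sorted(a for a in dir(time.struct_time) if a.startswith("tm_"))
print(attrs)

# At runtime the bogus attribute is of course caught, even though
# static tools without a stub cannot see this:
t = time.localtime()
assert hasattr(t, "tm_mday")
assert not hasattr(t, "bad")
```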
If you are coding in Python 3, you could look into using type hinting. For C extensions you still need stub files, but the community is pretty good at maintaining these now. The standard library stub files are maintained in a project called typeshed.
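For illustration, a simplified typeshed-style stub for struct_time might declare the attributes like this (a sketch only; the real typeshed entry is more involved, deriving from a named-tuple-like base class):

```python
# time.pyi (simplified sketch; not the actual typeshed source)
class struct_time:
    tm_year: int
    tm_mon: int
    tm_mday: int
    tm_hour: int
    tm_min: int
    tm_sec: int
    tm_wday: int
    tm_yday: int
    tm_isdst: int
```

With such a stub on the search path, a checker knows exactly which tm_* attributes exist and can flag anything else.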
You'd have to add type hints to your project:
#!/usr/bin/env python3
import time
from collections import namedtuple

t: time.struct_time = time.localtime()
e: int = t.tm_mday
e = t.bad  # this is not reported by linters or IDEs

class Clz:
    cvar: str = 'whee'
    ivar: str
    def __init__(self) -> None:
        self.ivar = 'whaa'

o = Clz()
s = Clz.cvar
s = o.ivar
s = Clz.bad
s = o.bad

Ntup = namedtuple('Ntup', 'thing')
n = Ntup(thing=3)
e = n.thing
e = n.bad
but then the flake8 tool combined with the flake8-mypy plugin will detect the bad attributes:
$ flake8 test.py
test.py:8:5: T484 "struct_time" has no attribute "bad"
test.py:22:5: T484 "Clz" has no attribute "bad"
test.py:23:5: T484 "Clz" has no attribute "bad"
test.py:28:5: T484 "Ntup" has no attribute "bad"
PyCharm builds on this work too, and perhaps can detect the same invalid use. It certainly directly supports pyi files.

Haskell error Not in scope: data constructor

I wrote a simple module in Haskell and then imported it in another file. When I try to use functions with data constructors from my module, I get the error Not in scope: data constructor: <value>. How can I fix it?
Note: when I use it in the interpreter after loading the module, everything works without errors.
My module Test.hs:

module Test (test_f) where

data Test_Data = T | U | F deriving (Show, Eq)

test_f x
  | x == T = T
  | otherwise = F

And my file file.hs:

import Test

some_func = test_f

No error if I write it in the interpreter:

> :l Test
> test_f T
T
In the interpreter I'm trying to execute some_func T, but there is an error. Also, how can I use the type Test_Data in my file to write type annotations?
You aren't exporting it from your module:

module Test (test_f, Test_Data(..)) where

The (..) part says "export all constructors for Test_Data".
You have an explicit export list in your module Test:
module Test (test_f) where
The export list (test_f) states that you want to export the function test_f and nothing else. In particular, the datatype Test_Data and its constructors are hidden.
To fix this, either remove the export list entirely:

module Test where

Now everything is exported.
Alternatively, add the datatype and its constructors to the export list:

module Test (test_f, Test_Data(..)) where

The notation Test_Data(..) exports the datatype together with all of its constructors.
