Z3py Context usage - multithreading

I don't understand how to use contexts in Z3/Z3py. The following example returns a context mismatch, but if I change the And into an == or Implies it works, and if I remove the context from the declaration of x I get z3.z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value. Why? I know that I don't need a context in this simple example, but I need to parallelize a more complex script that uses Z3py.
from z3 import *

ctx = Context()
solver = Solver(ctx=ctx)
x = Bool('x', ctx=ctx)
solver.add(And(x, x))
print(solver.check())
Traceback (most recent call last):
  File "minimal_test.py", line 6, in <module>
    solver.add(And(x, x))
  File "[...].local/lib/python3.7/site-packages/z3/z3.py", line 1727, in And
    _z3_assert(ctx_args is None or ctx_args == ctx, "context mismatch")
  File "[...].local/lib/python3.7/site-packages/z3/z3.py", line 96, in _z3_assert
    raise Z3Exception(msg)
z3.z3types.Z3Exception: context mismatch

The program, as you gave it in your question, runs just fine for me, producing sat. So I'm not sure why you're seeing what you're seeing, except perhaps you have an extremely old version of z3 that had a bug or something. (GitHub master is at 4.11.2; I'm using 4.9.2; what version do you have?)
If you remove the context from the declaration of x, you'll get "z3.z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value" because x was created in the global context while the solver was created in a particular one. In general, if you use an explicit context, simply declare all your variables with that context. (By default, everything uses the global context, which works fine when you have no explicit declarations.) Functions like And/Or/Implies simply use the context of their arguments to figure out which context to create the expression in.
So, the program as you gave it in your question is the correct way to use contexts. It's not clear to me why it's failing for you; perhaps you had many versions of this program and somehow confused which one is which?
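Since the eventual goal is parallelization: the usual pattern is one Context per thread, with every variable, expression, and solver created in that thread's own context. Here is a minimal sketch of that pattern (the worker body is made up for illustration; your real constraints would go there):

from threading import Thread
from z3 import Bool, And, Solver, Context

def worker(idx, results):
    ctx = Context()                # one context per thread
    x = Bool('x', ctx=ctx)         # every term is created in that context
    solver = Solver(ctx=ctx)
    solver.add(And(x, x))
    results[idx] = solver.check()

results = [None] * 4
threads = [Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # e.g. [sat, sat, sat, sat]

If you ever need to move an expression from one context to another, z3py provides translate for that (e.g. expr.translate(other_ctx)).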

Why doesn't PyCharm hint work with decorators?

PyCharm Version: 2019.1.2
Python Version: 3.7
I am trying to use the least code possible to reproduce the problem. This is the snippet of my code:
def sql_reader():
    def outer(func):
        def wrapped_function(*args, **kwargs):
            func(*args, **kwargs)
            return [{"a": 1, "b": 2}]
        return wrapped_function
    return outer

@sql_reader()
def function_read():
    return "1"

result = function_read()
for x in result:
    print(x['a'])
print(result)
Basically, what I am doing is "decorating" a function so that it outputs a different type. In this snippet, the function being decorated returns the string "1", but in the decorator I change the behavior and return a list of dicts.
Functionally speaking, it works fine, but my IDE always complains about it, which is annoying:
Is there any way I can get rid of this warning message?
With all due respect, you are using a version of PyCharm that is over three years old. I struggle to see a reason for this. The community edition is free and requires no root privileges to unpack and run on Linux systems. You should seriously consider upgrading to the latest version.
The same goes for Python, by the way. You can install any version (including the latest) via Pyenv without root privileges. The Python version requirement may be subject to external restrictions for the code you are working on, so that is just a suggestion; but for the IDE I see no reason to use such an outdated version.
Since I am not using your PyCharm version, I cannot reproduce your problem. Version 2022.2.3 has no such issues with your code. Be that as it may, there are a few things you can do to make life easier for static type checkers (and by extension for yourself).
The first thing I would always suggest is to use functools.wraps, when you are wrapping functions via a decorator. This preserves a lot of useful metadata about the wrapped function and even stores a reference to it in the wrapper's __wrapped__ attribute.
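As a quick illustration of what wraps preserves (a toy decorator, purely for demonstration):

from functools import wraps

def deco(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deco
def greet():
    """Say hello."""

print(greet.__name__)     # greet, not wrapper
print(greet.__doc__)      # Say hello.
print(greet.__wrapped__)  # reference to the original greet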
The second is proper type annotations. This should go for any code you write, unless it really is just a quick-and-dirty draft script that you will only use once and then throw away. The benefits of proper annotations especially in conjunction with modern IDEs are huge. There are many resources out there explaining them, so I won't go into details here.
In this concrete case, proper type hints will remove ambiguity about the return types of your functions and should work with any IDE (bugs notwithstanding). In my version of PyCharm, the return type of your wrapper function is inferred as Any because no annotations are present; that prevents any warning like yours from popping up, but it also doesn't allow any useful auto-suggestions to be provided.
Here is what I would do with your code (it should be compatible with Python 3.7):
from functools import wraps
from typing import Any, Callable, Dict, List

AnyFuncT = Callable[..., Any]
ResultsT = List[Dict[str, int]]

def sql_reader() -> Callable[[AnyFuncT], Callable[..., ResultsT]]:
    def outer(func: AnyFuncT) -> Callable[..., ResultsT]:
        @wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> ResultsT:
            func(*args, **kwargs)
            return [{"a": 1, "b": 2}]
        return wrapper
    return outer

@sql_reader()
def function_read() -> str:
    return "1"
Adding reveal_type(function_read()) underneath and calling mypy on this file results in the following:
note: Revealed type is "builtins.list[builtins.dict[builtins.str, builtins.int]]"
Success: no issues found in 1 source file
As you can see, at least mypy now correctly infers the type returned by the wrapper function we put around function_read. Your IDE should also correctly infer the types involved, but as I said I cannot verify this with my version.
Moreover, now PyCharm will give you auto-suggestions for methods available on the types involved:
results = function_read()
first = results[0]
value = first["a"]
If I now start typing results., PyCharm will suggest things like append, extend etc. because it recognizes results as a list. If I type first., it will suggest keys, values etc. (inferring it as a dictionary), and if I type value. it will give options like imag, real and to_bytes, which are available for integers.
More information: typing module docs

Declaring variables inside a function python

For some reason I keep getting an error in my code stating that my variables have not been defined. This only happens when I try to declare them inside a function, not outside.
Example:
x, y = 105, 107
print(x, y)
The above code works and gives me the output 105 107,
but when I do this below:
def fun1():
    x, y = 105, 107

print(x, y)
I get NameError: name 'x' is not defined.
I need help understanding what's happening.
One of the main utilities of functions is exactly the way they allow one to isolate variables: no worries about clashing names between the code in various functions, and once a function works properly, it is done.
But if one needs to expose variables that are populated inside a function to the outside world, it is possible with the keyword global. Notice that this is in general considered bad practice, and even for small scripts there are often better solutions. But common sense should always be the rule:
def fun1():
    global x, y
    x, y = 105, 107

fun1()
print(x, y)
Note that your code had another incorrect assumption: code inside function bodies is not executed unless the function is called. So, in the example in your question, even if you had declared the variables as global, the print call would still raise the same error, since you never execute the line that defines these variables by calling the function.
Now you've learned about globals; the next step is to forget they exist and learn how to work with variables properly encapsulated inside functions. When you get to an intermediate/advanced level, you will be able to judge when globals might actually do more good than harm (which is almost never).

What to do when a code review tool declares unmatched types?

I am working on developing a large-scale Python (backend) project. I was working with a firm that does extensive testing; they built the frontend and the test tools, and all the tools (like linters) are run before every deploy.
I had put the code down for a while, and now it fails many tests. Some of these are deprecation warnings for features or syntax soon to be deprecated; they note that they started classifying those as warnings (to later become errors) on January 1, 2020, so I know they make dynamic changes to the tools themselves.
My problem is that a bunch of code that used to pass no longer does, and the error is always the same. If I have a line that looks like the one below, I get an error along the lines of "error: may not use operator '-' with incompatible types; a and b are of types numpy.array and NoneType":
x = a - b
This gets fixed by making the code super-messy with this sort of fix:
x = a.astype(float) - b.astype(float)
It's even worse because in the actual code there are three variables, all doing addition and subtraction, with a c that is an integer array kicking around along with the two numpy arrays. So the code goes from:
x = a - b - c
to:
x = a.astype(float) - b.astype(float) - c.astype(float)
And this won't work, since ints don't have an astype method. The error now looks like this:
File "/home/engine.py", line 165, in Foo
lower_array[t].astype(float)) / num_values.astype(float)
AttributeError: 'NoneType' object has no attribute 'astype'
Thus, I end up with:
x = a.astype(float) - b.astype(float) - float(c)
This is all extraordinarily cumbersome and nasty casting, and it makes the code impossible to read.
The odd thing to me is that all three arrays were instantiated as numpy arrays, i.e.:
a = numpy.array(_a)
b = numpy.array(_b)
c = numpy.array(_c)
When I print the type of all three vars to stdout, they all say <class 'numpy.ndarray'>. Yet the next line of code blows up and dumps, saying "AttributeError: 'NoneType' object has no attribute 'astype'".
I can't fathom how a static code analyzer determines the types, other than as numpy.ndarray, since Python uses duck typing and the type could change dynamically. But that's not the case here: all three vars are identified as numpy.ndarray, yet "x = a - b - c" fails.
Anyone understand what's going on here?
After much work, the answer is to ignore the linter. Readable code is the object, not code that satisfies a linter.
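To make the "ignore the linter" advice concrete: NumPy promotes mixed integer/float dtypes automatically at runtime, so the arithmetic itself needs no casting, and many Python type checkers accept a per-line suppression comment. The # type: ignore below is mypy's syntax; whether the unnamed tool here honors it is an assumption:

import numpy

a = numpy.array([1.0, 2.0])
b = numpy.array([0.5, 0.5])
c = numpy.array([1, 1])  # integer array; NumPy handles the mixed dtypes

x = a - b - c  # type: ignore  # suppression comment, assuming a mypy-style checker
print(x)  # [-0.5  0.5]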

Python3 - "ValueError: not enough values to unpack (expected 3, got 1)"

I'm very new to Python and programming overall, so if I seem to struggle to understand you, please bear with me.
I'm reading "Learn Python 3 the Hard Way", and I'm having trouble with exercise 23.
I copied the code to my text editor and ended up with this:
import sys

script, input_encoding, error = sys.argv

def main(language_file, encoding, errors):
    line = language_file.readline()
    if line:
        print_line(line, encoding, errors)
        return main(language_file, encoding, errors)

def print_line(line, encoding, errors):
    next_lang = line.strip()
    raw_bytes = next_lang.encode(encoding, errors=errors)
    cooked_string = raw_bytes.decode(encoding, errors=errors)
    print(raw_bytes, "<====>", cooked_string)

languages = open("languages.txt", encoding="utf-8")

main(languages, input_encoding, error)
When I tried to run it I got the following error message:
Traceback (most recent call last):
  File "pag78.py", line 3, in <module>
    script, input_encoding, error = sys.argv
ValueError: not enough values to unpack (expected 3, got 1)
which I am having difficulty understanding in this context.
I googled the exercise to compare it to something other than the book page and, unless I'm missing something, I copied it correctly. For example, see this code here for the same exercise.
Obviously something is wrong with this code, and I'm not capable of identifying what it is.
Any help would be greatly appreciated.
When you run the program, you have to enter your arguments on the command line. So run the program like this:
python ex23.py utf-8 strict
Copy and paste all of that into your terminal to run the code. This exercise uses argv like the earlier ones do; it says this in the chapter, just a little bit later. I think you jumped the gun and ran the code before you got to the explanation.
Let's record this in an answer for the sake of posterity. In short, the immediate problem lies not so much in the script itself as in how it's being called: no positional arguments were given, but two were expected, to be assigned to input_encoding and error.
This line:
script, input_encoding, error = sys.argv
takes the list of arguments passed to the script (sys.argv) and unpacks it, that is, assigns its items' values to the variables on the left. This assumes the number of variables to unpack into corresponds to the item count of the list on the right.
sys.argv contains the name of the script as its first item, followed by each additional argument passed to it, one item each. For example, running python pag78.py utf-8 strict yields sys.argv == ['pag78.py', 'utf-8', 'strict'].
This construct is actually a very simple way to ensure that the correct number of expected arguments is provided, even though the resulting error is perhaps not the most obvious.
Later on, you should certainly check out argparse for handling passed arguments. It is convenient and quite powerful.
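For instance, a minimal argparse version of this script's argument handling might look like the sketch below (the help texts and description are made up; adapt the names as needed). If an argument is missing, argparse prints a usage message naming it instead of an opaque unpacking error:

import argparse

parser = argparse.ArgumentParser(description="Exercise 23: print a languages file.")
parser.add_argument("input_encoding", help="encoding to use, e.g. utf-8")
parser.add_argument("error", help="error handler, e.g. strict or replace")
args = parser.parse_args()

print(args.input_encoding, args.error)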
I started reading LPTHW a couple of weeks ago and got the same error as 'micaldras'. The error arises because you have probably clicked the file link and opened it in an Internet Explorer window. From there, I guess, you copied the text into a notepad file and saved it.
I did that as well and got the same errors. I then downloaded the file directly from the indicated link (right-click on the file and choose Save Target As). That saves the file exactly as Zed intended, and the program now runs.

Examples of open-source projects using Python 3 function annotations

Can anyone give me some examples of Python open-source projects using the function annotations introduced in Python 3?
I want to see some practical uses of this feature and see if I can use it in my own project.
I have never seen this feature used in the wild. However, one potential use of function annotations that I explored in an article on Python 3 that I wrote for USENIX ;login: was for enforcing contracts. For example, you could do this:
from functools import wraps

def positive(x):
    'must be positive'
    return x > 0

def negative(x):
    'must be negative'
    return x < 0

def ensure(func):
    'Decorator that enforces contracts on function arguments (if present)'
    return_check = func.__annotations__.get('return', None)
    arg_checks = [(name, func.__annotations__.get(name))
                  for name in func.__code__.co_varnames]
    @wraps(func)
    def assert_call(*args):
        for (name, check), value in zip(arg_checks, args):
            if check:
                assert check(value), "%s %s" % (name, check.__doc__)
        result = func(*args)
        if return_check:
            assert return_check(result), "return %s" % (return_check.__doc__)
        return result
    return assert_call

# Example use
@ensure
def foo(a: positive, b: negative) -> positive:
    return a - b
If you do this, you'll see behavior like this:
>>> foo(2, -3)
5
>>> foo(2, 3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "ensure.py", line 22, in assert_call
    assert check(value), "%s %s" % (name, check.__doc__)
AssertionError: b must be negative
I should note that the above example needs to be fleshed out more to work properly with default arguments, keyword arguments, and other details. It's only a sketch of an idea.
Now, whether or not this is a good idea, I just don't know. I'm inclined to agree with Brandon that the lack of composability is a problem, especially if annotations start to be used by different libraries for different purposes. I also wonder if something like this contract idea couldn't be accomplished through decorators instead. For example, imagine a decorator that was used like this (implementation left as an exercise):
@ensure(a=positive, b=negative)
def foo(a, b):
    return a - b
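One possible sketch of that keyword-based variant, reusing the positive/negative checkers from above; inspect.signature handles positional and keyword arguments alike, and this is only one of several ways to do the exercise:

from functools import wraps
import inspect

def ensure(**checks):
    'Keyword-based contract decorator (a sketch, not the article version)'
    def decorate(func):
        sig = inspect.signature(func)
        @wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, check in checks.items():
                if name in bound.arguments:
                    assert check(bound.arguments[name]), "%s %s" % (name, check.__doc__)
            return func(*args, **kwargs)
        return wrapper
    return decorate

@ensure(a=positive, b=negative)
def foo(a, b):
    return a - b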
A historical note: I've always kind of felt that function annotations were an outgrowth of discussions about "optional static typing" that the Python community had more than 10 years ago. Whether that was the original motivation or not, I just don't know.
I will play the curmudgeon and recommend against using the feature; hopefully it will someday be removed. Python has so far done a great job of deploying features that are attractive, orthogonal, and stackable. Function decorators are a great example: if I use three different libraries that all want me to decorate my function, the result looks rather clean:
@lru_cache(max_items=5)
@require_basic_auth
@view('index.html')
def index(…):
    ⋮
But this newfangled, awkward "annotations" feature takes Python in the opposite direction: because you can only annotate a given function exactly once, it completely breaks the ability to compose solutions out of various pieces. If two libraries each wanted you to annotate the same function on their behalf, then, from what I can see, you would be completely stymied.
