How to ignore deprecation warnings in astropy 3.2? - python-3.x

I recently updated Astropy to the new 3.2(.1) version. Suddenly a lot of AstropyDeprecationWarnings started to appear.
I used to deal with them in this way:
import warnings
from astropy.utils.exceptions import AstropyDeprecationWarning

warnings.simplefilter('ignore', category=AstropyDeprecationWarning)
However, with this new Astropy version, the filter is basically ignored and the warnings keep showing.
It is not a vital problem; the code works, of course, but I would like to get rid of the warnings because they make the output harder to read. Is there a way to solve this? Am I doing something wrong?
Thanks!
Edit:
The two warnings that I see more often are:
WARNING: AstropyDeprecationWarning: astropy.extern.six will be removed in 4.0, use the six module directly if it is still needed [astropy.extern.six]
WARNING: AstropyDeprecationWarning: Composition of model classes will be removed in 4.0 (but composition of model instances is not affected) [astropy.modeling.core]
I think the second warning arises because of this line of code where I am combining a Gaussian model + a background:
model = models.Gaussian1D(amplitude=flux.max()*0.9, mean=0., stddev=size) \
        + models.Const1D(amplitude=flux.min()*0.9)
I have no idea where the first warning comes from. I do not explicitly import astropy.extern.six (I actually do not know what it is), so it might be related to the second warning or come from third-party code.
Edit v2:
I investigated this a little more: the combination of models is not responsible for the warnings, contrary to what I first thought. Apparently the astropy.extern.six warning arises from:
from astroquery.ned import Ned
While the composition of model classes warning arises from:
import photutils as ph
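Since both warnings appear when the third-party packages are imported, one possible workaround is to install the filter before those imports run. This is only a sketch, assuming the messages go through Python's warnings machinery; the "WARNING:" prefix suggests they may instead be routed through Astropy's logger, so that case is noted in the comments:
import warnings
from astropy.utils.exceptions import AstropyDeprecationWarning

# Install the filter before the packages that trigger the warnings are imported.
warnings.simplefilter('ignore', category=AstropyDeprecationWarning)

from astroquery.ned import Ned   # emitted the astropy.extern.six warning
import photutils as ph           # emitted the model-composition warning

# If the messages still appear because they are routed through Astropy's
# logger, raising the logger level may also be needed:
# from astropy import log
# log.setLevel('ERROR')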

Related

AttributeError: Can't get attribute 'Vocab' on <module 'gensim.models.word2vec'

My problem is similar to this question. I tried both solutions posted on the question, but I still receive the error that the attribute "Vocab" is not available in the gensim.models.word2vec module.
The part of my code using this attribute is here:
# if word in model.keys():  # use model.vocab for w2v models and model.keys() for GloVe dicts
if word in self.w2v_model.wv.vocab:
    vector = self.w2v_model.wv[word]
else:
    vector = [0] * 100
pip install gensim==3.8.1 worked - the issue was with a particular Gensim package version.
Gensim 4.0.0, with many fixes & performance improvements, has also changed some property/method names for simplicity & long-term consistency. A project wiki page has a guide to adapting older code to match the new APIs:
https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4
But your code doesn't need to use .vocab at all. The set of word-vectors, w2v_model.wv, supports membership tests directly, so the following code should work both pre-4.0 and in 4.0 and above:
if word in self.w2v_model.wv:
    vector = self.w2v_model.wv[word]
else:
    vector = [0] * 100
(Separately, if you did choose to keep using an older Gensim to put off other code changes, it would be better to use 3.8.3, the last release in the 3.x series (May 2020), rather than the older and buggier 3.8.1 from September 2019. But some key word2vec-related operations are faster and use less memory in gensim 4.0.0 and higher, so rolling back should be avoided if possible.)
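For completeness, here is a small sketch of the gensim 4.x replacements for the old .vocab lookups. The attribute names follow the migration guide linked above; the load path is just a placeholder:
from gensim.models import KeyedVectors

wv = KeyedVectors.load("vectors.kv")       # placeholder path for your saved vectors
if "word" in wv:                           # membership test works directly on wv
    vector = wv["word"]
idx = wv.key_to_index.get("word")          # replaces wv.vocab["word"].index
vocab_size = len(wv)                       # replaces len(wv.vocab)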

What to do when a code review tool declares unmatched types?

I am working on a large-scale Python (backend) project. I was working with a firm that does extensive testing, and they built the frontend and the test tools. Before every deploy, all the tools (like linters) are run.
I had put the code down for a while, and now it fails many tests. Some of these are deprecation warnings for features or syntax soon to be deprecated, and they note that they started classifying those as warnings (to later become errors) on January 1, 2020, so I know they make dynamic changes to the tools themselves.
My problem is that a bunch of code that used to pass no longer does. And the error is always the same: if I have a line that looks like the following, I get an error along the lines of "error: may not use operator '-' with incompatible types; a and b are of types numpy.array and NoneType":
x = a - b
This gets fixed by making the code super-messy with this sort of fix:
x = a.astype(float) - b.astype(float)
It's even worse because in the actual code there are three variables, all doing addition and subtraction, with a 'c' that is an integer array kicking around along with the two numpy arrays. So the code goes from:
x = a - b - c
to:
x = a.astype(float) - b.astype(float) - c.astype(float)
And this won't work, since ints don't have an astype method. The error now looks like this:
File "/home/engine.py", line 165, in Foo
lower_array[t].astype(float)) / num_values.astype(float)
AttributeError: 'NoneType' object has no attribute 'astype'
Thus, I end up with:
x = a.astype(float) - b.astype(float) - float(c)
All of this required casting is extraordinarily cumbersome and makes the code impossible to read.
The odd thing to me is that all three arrays were instantiated as numpy arrays, i.e.,:
a = numpy.array(_a)
b = numpy.array(_b)
c = numpy.array(_c)
When I print the type of all three vars to stdout, they all report numpy.ndarray. Yet the next line of code blows up and dumps, saying "AttributeError: 'NoneType' object has no attribute 'astype'".
I can't fathom how a static code analyzer determines the types - other than as numpy.ndarray - since Python uses duck typing, so the type could change dynamically. But that's not the case here; all three vars are identified as numpy.ndarray, yet "x = a - b - c" fails.
Anyone understand what's going on here?
After much work, the answer is to ignore the linter. Readable code is the objective, not code that satisfies a linter.
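That said, if you would rather quiet the checker than ignore it, here is a minimal sketch (my own suggestion, not from the original post) that validates the operands and lets numpy handle the casting, avoiding the chain of .astype calls:
import numpy as np

def safe_combine(a, b, c):
    # Fail fast with a clear message if any operand is actually None,
    # which is what both the checker and the runtime error point at.
    if a is None or b is None or c is None:
        raise ValueError("expected array-like operands, got None")
    # np.asarray handles numpy arrays, integer arrays, and plain ints alike,
    # so no per-variable .astype / float() juggling is needed.
    return np.asarray(a, float) - np.asarray(b, float) - np.asarray(c, float)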

Convolution layer in keras

I am exploring the convolution layers in Keras from:
https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L233
Everywhere I found lines of the following kind:
@interfaces.legacy_conv1d_support
@interfaces.legacy_conv2d_support
What is the purpose and role of these lines? I searched on Google but could not find an answer anywhere. Please explain.
These lines starting with @ are called decorators in Python. Check out this page to read a brief summary about them. The basic function of these decorators is that they wrap the decorated function in another function that performs some kind of "wrapper" work, like preprocessing the arguments, changing the accessibility of the function, etc.
Taking a look at the interfaces.py file you will see this:
legacy_conv1d_support = generate_legacy_interface(
    allowed_positional_args=['filters', 'kernel_size'],
    conversions=[('nb_filter', 'filters'),
                 ('filter_length', 'kernel_size'),
                 ('subsample_length', 'strides'),
                 ('border_mode', 'padding'),
                 ('init', 'kernel_initializer'),
                 ('W_regularizer', 'kernel_regularizer'),
                 ('b_regularizer', 'bias_regularizer'),
                 ('W_constraint', 'kernel_constraint'),
                 ('b_constraint', 'bias_constraint'),
                 ('bias', 'use_bias')],
    preprocessor=conv1d_args_preprocessor)
So the purpose of this function is basically to rename parameters. Why? The Keras API changed the names of some arguments of some functions (like W_regularizer -> kernel_regularizer). To let users keep running old code, the developers added this decorator, which simply replaces old argument names with the corresponding new names before calling the real function. This allows you to run "old" Keras 1 code even though you have Keras 2 installed.
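To make the mechanism concrete, here is a toy sketch (not Keras's actual implementation) of a decorator that renames legacy keyword arguments before calling the wrapped function:
import functools

def legacy_support(conversions):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Rename any legacy keyword argument to its new name.
            for old_name, new_name in conversions:
                if old_name in kwargs:
                    kwargs[new_name] = kwargs.pop(old_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@legacy_support(conversions=[('nb_filter', 'filters')])
def conv1d(filters, kernel_size=3):
    return filters, kernel_size

print(conv1d(nb_filter=64))  # old-style call still works -> (64, 3)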
Tl;dr: These lines exist purely for compatibility reasons. As they are internal aspects of Keras, there is nothing you have to worry about or take care of.

update RHS on a constraint in scip using python

Are there any good solutions for updating the rhs of a constraint? Preferably I would like to do something like:
import pyscipopt as scp

Mod = scp.Model()
x = Mod.addVar(ub=3, name="x")
y = Mod.addVar(ub=4, name="y")
c = Mod.addCons(x + y <= 2, "C1")
Mod.setObjective(0.5*x + 0.3*y, "maximize")
Mod.optimize()
print(Mod.getObjVal())
c.updateRHS(4)  # This function does not exist...
Mod.optimize()
print(Mod.getObjVal())
This is fixed in the latest version of PySCIPOpt (see https://github.com/SCIP-Interfaces/PySCIPOpt/pull/70)
The methods are called chgLhs() and chgRhs(). Keep in mind that they are only going to work for linear and quadratic constraints for now.
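Here is a sketch of how the example above could look with chgRhs, assuming a PySCIPOpt version that includes the fix; the freeTransform call is my assumption about what is needed to modify a model that has already been solved:
import pyscipopt as scp

mod = scp.Model()
x = mod.addVar(ub=3, name="x")
y = mod.addVar(ub=4, name="y")
c = mod.addCons(x + y <= 2, "C1")
mod.setObjective(0.5*x + 0.3*y, "maximize")
mod.optimize()
print(mod.getObjVal())

mod.freeTransform()   # return to the original problem so it can be modified
mod.chgRhs(c, 4.0)    # change the right-hand side of C1 from 2 to 4
mod.optimize()
print(mod.getObjVal())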

util/Natural unexpected behavior in alloy

I tried the following snippet of Alloy4, and found myself confused by the behavior of the util/Natural module. The comments explain more in detail what was unexpected. I was hoping someone could explain why this happens.
module weirdNatural
private open util/natural as nat
//Somehow, using the number two obtained by incrementing one works as I expect (i.e., there is no
//number greater than it in {0,1,2}), but using the number two obtained from the natural/add function
//seems to work differently. Why is that?
let twoViaAdd = nat/add[nat/One, nat/One]
let twoViaInc = nat/inc[nat/One]
pred biggerAdd {
    some x: nat/Natural | nat/gt[x, twoViaAdd]
}
pred biggerInc {
    some y: nat/Natural | nat/gt[y, twoViaInc]
}
//run biggerAdd for 10 but 3 Natural // does not work well: it finds a number greater than two in {0,1,2}
run biggerInc for 10 but 3 Natural // works as expected: it finds a number greater than two in {0,1,2,3}, but not in {0,1,2}
Thanks for this bug report. You are absolutely right, that is a weird behavior.
Alloy 4.2 introduced some changes to how integers are handled; namely, the + and - operators in Alloy 4.2 are always interpreted as set union/difference, so the built-in functions plus/minus have to be used to express arithmetic addition/subtraction. On the other hand, the util/natural module (mistakenly) was not updated to use the latest syntax, which is the root cause of the weird behavior you experienced (specifically, the nat/add function uses the old + operator instead of plus, whereas nat/inc doesn't).
To work around this issue, you can either
open the util/natural module (choose "File -> Open Sample Models" from the main menu)
edit the file and replace the two occurrences of <a> + <b> with plus[<a>, <b>]
save the new file in the same folder as your model's .als file (e.g., as "my_nat.als")
open it from your main module (e.g., open my_nat as nat)
or
download the latest unofficial version where this bug is fixed (you might need to manually delete the Alloy temp folder to make sure that the Alloy Analyzer is not using the old (cached) version of the util/natural library).
