Here is the screenshot of my Xcode Playground:
As you can see, the str is printed as some("Hello"). This really confuses me as there seems to be no documentation on it.
Does anyone have a good explanation for this some()?
System info:
swift -version: 4.1.2
Xcode: 9.4.1
This appears to be a quirk in print for this compiler version; purely as conjecture, it may be an artefact of the work on changing the semantics of implicitly unwrapped optionals, see SE-0054, Abolish ImplicitlyUnwrappedOptional type.
The type Optional is, stripping to the basics, defined as:
enum Optional<Wrapped> {
    case none
    case some(Wrapped)
}
Normally, if you print() an enum value you get its case label, here none or some(...); however, print() special-cases optionals and prints them as nil or Optional(...).
It seems that in Xcode 9.4.1 (at least) implicitly unwrapped optionals are printed as optionals but without that special casing, whereas Xcode 9.2 (at least) prints the unwrapped value, as would be expected (because it is implicitly unwrapped).
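A quick sketch of the two behaviours being contrasted; Coin is just an illustrative enum here, and the version-dependent line is the implicitly unwrapped one:
enum Coin {
    case heads
    case tails
}

print(Coin.heads)          // an ordinary enum prints its case label: "heads"

let opt: String? = "Hello"
print(opt as Any)          // optionals are special-cased: Optional("Hello")

let str: String! = "Hello"
print(str)                 // "Hello" in Xcode 9.2; "some("Hello")" in Xcode 9.4.1, as reported above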
Maybe there is other interesting behaviour for implicitly unwrapped optionals in 9.4.1. You should test in Xcode 10 Beta and/or report a bug (bugreport.apple.com) in 9.4.1 and see what Apple say.
I was recently learning about type hinting, and after reading some theory I tried a simple example, as below:
def myfun(num1: int, num2: int) -> int:
    return str(num1) + num2

a = myfun(1, 'abc')
print(a)
# output -> 1abc
Here you can see that I have defined num1 and num2 as type int, and even after passing num2 as a string it does not generate any error.
Also, the function is declared to return an int, but it doesn't complain about returning a str value.
Can someone please explain what's going wrong here?
It's called type hinting for a reason: you're giving a hint about what a variable should be to the IDE or to anyone else reading your code. At runtime, the type hints don't actually mean anything; they exist for documentation and for use by other developers and tools. If you go against the type hints, Python won't stop you (it assumes you know what you're doing), but it's poor practice.
Note that this differs from statically-typed languages like Java, where trying to pass an argument that's incompatible with the function's definition produces a compile-time error, while passing a differently-typed but compatible argument (e.g. passing an int to a function that expects a float) is automatically widened.
Note that the code you've given will raise a TypeError if the programmer uses it as intended, because an int cannot be concatenated to a str. Your IDE or linter should be able to see this, based on the type hints, and give you a warning on that line. That's what the type hints are for: informing the behavior of the IDE and the documentation, and providing a red flag that you might not be using a function in the intended way, not doing anything at runtime.
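For contrast, here is a minimal sketch of what enforcement could look like if you actually want it at runtime; myfun_checked is an illustrative helper, not anything the language requires:
def myfun(num1: int, num2: int) -> int:
    # Hinted but unenforced: a checker flags misuse, the interpreter does not.
    return num1 + num2

def myfun_checked(num1: int, num2: int) -> int:
    # Explicit runtime enforcement, which type hints alone never provide.
    if not isinstance(num1, int) or not isinstance(num2, int):
        raise TypeError("myfun_checked expects two ints")
    return num1 + num2

print(myfun_checked(1, 2))   # 3
# myfun_checked(1, 'abc')    # would raise TypeError at runtime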
There seems to be some inconsistency in the way tryParse is used in Dart, or I'm going about it in a silly way; likely the latter.
When we use int.tryParse and pass it 10.0 as a double, we would expect to get 10, which we do.
print(int.tryParse(10.0.toString())); // 10
If we pass it 10.0 as a string, it returns null.
print(int.tryParse('10.0')); // null
I find this a bit strange, as I would have thought 10.0.toString() was equivalent to '10.0'.
Does anyone have an explanation?
The difference is between the web compiler and native code.
When Dart is compiled to the web (to JavaScript), all numbers are JavaScript numbers, which effectively means doubles.
That means the integer 10 and the double 10.0 are the same object. There is only one "10" value in JavaScript.
So, when you call toString on that value, it has to choose, and it chooses "10" over "10.0".
When run natively, the double 10.0 and the integer 10 are two different objects, and their toStrings are "10.0" and "10" respectively.
The int.tryParse function does not accept "10.0" as input, which is why it returns null.
So, when testing on the web (including dartpad.dev), int.tryParse(10.0.toString()) succeeds because the toString is "10", and when testing natively the same code gives null because the toString is "10.0".
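A small sketch that makes the difference observable; double.tryParse is shown as one portable way to accept both spellings:
void main() {
  print(10.0.toString());               // "10" on the web, "10.0" natively
  print(int.tryParse(10.0.toString())); // 10 on the web, null natively
  print(int.tryParse('10.0'));          // null on both platforms
  print(double.tryParse('10.0'));       // accepted everywhere: "10" on the web, "10.0" natively
}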
On the JS target, 10.0.toString() returns '10', not '10.0'; that's why you get different results. Here is just the same question. So, as I understand it, we have:
Dart's 10.0 behaves as an int on the JS target (and there is not such a big difference between int and double on the JS target, as explained at the link);
Dart's 10.0 is a double on the Dart VM, where toString returns "10.0", a string that int.tryParse rejects.
So this is compiler/platform behaviour rather than a tryParse bug: on the web target, when everything after the decimal point is zero, toString drops the fractional part, and the resulting string parses cleanly as an int.
The following code works as expected, but the os.path.join produces a type error using pyright in VSCode, where shown.
# python 3.6.9
# pyright 1.1.25
# windows 10
# vscode 1.42.1
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpfolder:
    name = "hello.txt"
    path = os.path.join(tmpfolder, name)
    # No overloads for 'os.path.join(tmpfolder, name)' match parameters
    # Argument types: (TypeVar['AnyStr', str, bytes], Literal['hello.txt'])
    print(path)
I think I understand the immediate cause of the problem, but contend it should not be happening. Given that, I have some questions:
Is this the idiomatic way to write this code?
Is the problem in tempfile, os, pyright, or me?
If I cannot upgrade Python, what is the best (i.e. least clunky) way to suppress the error?
This seems like a limitation of pyright.
In short, the tempfile.TemporaryDirectory class is typed to be generic with respect to AnyStr. However, your example code omits specifying the generic type, leaving it up to the type checker to infer something appropriate.
In this case, I think there are several reasonable things for a type checker to do:
Pick some default generic type based on the typevar, such as 'str' or 'Union[str, bytes]'. For example, mypy ends up picking 'str' by default, giving 'tmpfolder' a type of 'str'.
Pick some placeholder type: either 'Any' (the dynamic type) or 'NoReturn' (aka 'bottom', aka 'nothing'). Both are treated as compatible with every type, so they are guaranteed to be valid placeholders and not cause downstream errors. This is what pyre and pytype do -- they deduce that 'tmpfolder' has type 'Any' and 'nothing', respectively.
Attempt to infer the correct type based on context. Some type checkers may attempt to do this, but I don't know of any that handles this particular case perfectly.
Report an error and ask the user to specify the desired generic type.
What pyright seems to do instead is to just "leak" the generic variable. There is perhaps a principled reason why pyright is deciding to do this that I'm overlooking, but IMO this seems like a bug.
To answer your other questions, your example program is idiomatic Python, and type checkers should ideally support it without modification.
Adding a # type: ignore comment to the line with the error is the PEP 484 sanctioned way of suppressing error messages. I'm not familiar enough with pyright to know if it has a different preferred way of suppressing errors.
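For example, applied to the code from the question, plus an alternative (untested against that exact pyright version) that pins the generic parameter with a string annotation so it still runs on Python 3.6:
import os
import tempfile

with tempfile.TemporaryDirectory() as tmpfolder:
    path = os.path.join(tmpfolder, "hello.txt")  # type: ignore
    print(path)

# Alternative sketch: spell out the generic type so nothing is left to inference.
tmpdir: "tempfile.TemporaryDirectory[str]" = tempfile.TemporaryDirectory()
with tmpdir as tmpfolder:
    print(os.path.join(tmpfolder, "hello.txt"))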
This question represents a proposed bug for Globalize. The owners of that project ask that it first be published as a SO question, so here we go...
With the new versions 1.2.1 and 1.2.2 of Globalize, we're noticing that number-parsing an empty string returns 0 (seemingly independent of culture). This behavior differs from the previous version 1.1.2, where it returned NaN. Reproduction:
var g = new Globalize("en-US");
g.numberParser()(''); // returns 0 in v1.2.1 and NaN in v1.1.2.
Intuition tells me that parsing an empty string should not return 0. Vanilla JavaScript parse functions (e.g. parseInt) return NaN in such cases, supporting this intuition.
Furthermore, the relevant unit test in the Globalize project does not seem to cover this case, so it's unclear whether or not the changed behavior is intended. From a brief look at the changelog of the 1.2.* releases I can't seem to find any note of an intention to change this behavior.
Note that parsing whitespace in the new version does indeed return NaN:
var g = new Globalize("en-US");
g.numberParser()(' '); // returns NaN in both v1.2.1 and v1.1.2.
We're hoping that one of the project members will either confirm that this is a bug and raise a corresponding issue in the Globalize project, or explain why this is now expected behavior.
It's a bug, thanks for reporting. It will be tracked in https://github.com/globalizejs/globalize/issues/682
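In the meantime, a possible workaround sketch; parseNumberStrict is an illustrative wrapper, not part of Globalize's API:
var g = new Globalize("en-US");
var parse = g.numberParser();

// Treat empty or blank input as NaN explicitly, regardless of Globalize version.
function parseNumberStrict(input) {
    return input.trim() === "" ? NaN : parse(input);
}

parseNumberStrict("");       // NaN in both v1.1.2 and v1.2.1
parseNumberStrict("1,234");  // 1234 for en-US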
Elsewhere I've seen it said that Swift's string comparisons use NFD normalization.
However, running in the iSwift playground I've found that
print("\u{0071}\u{0307}\u{0323}" == "\u{0071}\u{0323}\u{0307}");
gives false, despite this being an example straight from the Unicode standard's definition of "Canonical Equivalence", which Swift's documentation claims to follow.
So, what kind of canonicalization is performed by Swift, and is this a bug?
It seems that this was a bug in Swift that has since been fixed. With Swift 3 and Xcode 8.0,
print("\u{0071}\u{0307}\u{0323}" == "\u{0071}\u{0323}\u{0307}")
now prints true.
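The same canonical-equivalence guarantee can be seen with a precomposed versus a decomposed character; a short sketch that also prints true on current Swift:
let precomposed = "\u{00E9}"         // "é" as a single scalar
let decomposed = "\u{0065}\u{0301}"  // "e" followed by a combining acute accent
print(precomposed == decomposed)     // true: String comparison is canonical-equivalence aware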