I'm confused about the languages category, can anyone please explain? - programming-languages

Which of the following statements is FALSE?
(A) In statically typed languages, each variable in a program has a fixed type
(B) In un-typed languages, values do not have any types
(C) In dynamically typed languages, variables have no types
(D) In all statically typed languages, each variable in a program is associated with values of only a single type during the execution of the program
Can you please explain the theory as well?

(C) ("In dynamically typed languages, variables have no types") is false.
The variable has a type; it is simply not stated or decided until run time. This implies there is no type checking before the program runs.
A useful link describing type systems and what "type" means:
http://en.wikipedia.org/wiki/Type_system
If you have ever done much with PHP, you will notice that when you declare a variable, you do not have to say whether it is an INT or a STRING. However, sometimes you know that you will be receiving a string but need an int, so you can still cast values at runtime, even though when you declared the variable you did not explicitly state that it would hold an int.
<?php
#some more code here.....
# over here $myValue could start out as some different type; it can dynamically change to another type
$myValue = '5'; # storing a string...so $myValue is currently of type string
$myNewValue = (int)$myValue + 5; # cast to integer, so $myNewValue holds the int 10; $myValue itself is still a string
?>
If that doesn't help, maybe take a look at this Python example.
myPythonVariable = "I am currently a string" #the variable is of type string
myPythonVariable = 5 #the variable is now of type integer
In the above code sample, myPythonVariable always has a type; whether or not that type changes doesn't matter.
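You can watch the type follow the value with the built-in type() function; this snippet just restates the example above interactively:
myPythonVariable = "I am currently a string"
print(type(myPythonVariable))  # <class 'str'>

myPythonVariable = 5
print(type(myPythonVariable))  # <class 'int'>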

Related

NamedTuple - checking types of fields at runtime

Is there a neat solution to raise an error if a value passed to a NamedTuple field does not match the declared type?
In this example, I intentionally passed page_count as a str instead of an int, and the script will happily carry the erroneous value forward.
(I understand that a linter will draw your attention to the error, but I encountered this in a case where the NamedTuple fields were filled in by a function getting values from a config file.)
I could check the type of each value with a condition, but it doesn't look really clean. Any ideas? Thanks.
from typing import NamedTuple

class ParserParams(NamedTuple):
    api_url: str
    page_count: int
    timeout: float

parser_params = ParserParams(
    api_url='some_url',
    page_count='3',
    timeout=10.0,
)
By design, Python is a dynamically typed language, which means any value can be assigned to any variable. Type annotations are only hints: the errors might be highlighted in your IDE, but nothing is enforced at runtime.
This means that if you need type checking, you have to implement it yourself. On the upside, this can probably be automated, i.e. implemented only once instead of separately for every field. However, NamedTuple does not provide such checking out of the box.
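One possible way to automate it is sketched below, under the assumption that every field annotation is a plain class such as str, int or float (generic annotations like list[int] would need extra handling); the validate method name is just an illustrative choice:
from typing import NamedTuple, get_type_hints

class ParserParams(NamedTuple):
    api_url: str
    page_count: int
    timeout: float

    def validate(self) -> "ParserParams":
        # Compare each field's value against its annotated type.
        for field, expected in get_type_hints(type(self)).items():
            value = getattr(self, field)
            if not isinstance(value, expected):
                raise TypeError(
                    f"{field} should be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return self

parser_params = ParserParams(
    api_url='some_url',
    page_count='3',   # wrong type: validate() raises TypeError for this field
    timeout=10.0,
).validate()
If you need this kind of check in many places, a third-party library such as pydantic does similar validation out of the box, but the sketch above avoids an extra dependency.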

python type hinting not generating error for wrong type

I was recently reading about type hinting, and after going through some theory I tried a simple example, shown below.
def myfun(num1: int, num2: int) -> int:
    return str(num1) + num2

a = myfun(1, 'abc')
print(a)
# output -> 1abc
Here you can see that I have defined num1 and num2 as type int, and even after passing the num2 value as a string it does not generate any error.
Also, the function is declared to return an int, but there is no complaint about it returning a string value.
Can someone please explain what's going wrong here?
It's called type hinting for a reason: you're giving a hint about what a variable should be to the IDE or to anyone else reading your code. At runtime, the type hints don't actually mean anything - they exist for documentation and usability by other developers. If you go against the type hints, Python won't stop you (it assumes you know what you're doing), but it's poor practice.
Note that this differs from statically typed languages like Java, where trying to pass an argument that's incompatible with the function's definition will produce a compile-time error, and if you pass a differently-typed but compatible argument (e.g. passing an int to a function that expects a double) it will be automatically widened.
Note that the code you've given will encounter a TypeError if the programmer uses it like they're supposed to, because int cannot be concatenated to a str. Your IDE or linter should be able to see this and give you a warning on that line, which it does based on the type hints. That's what the type hints are for - informing the behavior of the IDE and documentation, and providing a red flag that you might not be using a function in the intended way - not anything at runtime.
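If you actually want the bad call rejected at runtime, you have to add a check yourself. Here is a minimal sketch; the isinstance guard is an addition (not something the hints provide), and the body is also changed to return an int so it matches its own hint:
def myfun(num1: int, num2: int) -> int:
    # Explicit runtime guard; the annotations by themselves enforce nothing.
    if not isinstance(num1, int) or not isinstance(num2, int):
        raise TypeError("myfun expects two ints")
    return num1 + num2

print(myfun(1, 2))    # 3
myfun(1, 'abc')       # raises TypeError from the explicit guard
Alternatively, a static checker such as mypy run over your original code should flag both the str(num1) + num2 expression and the myfun(1, 'abc') call before the program ever runs.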

difference between variable definition in a Haskell source file and in GHCi?

In a Haskell source file, I can write
a = 1
and I had the impression that I have to write the same in GHCi as
let a = 1
because a = 1 in GHCi gives a parse error on =.
Now, if I write
a = 1
a = 2
in a source file, I will get an error about Multiple declaration of a, but it is OK to write in GHCi:
let a = 1
let a = 2
Can someone help clarify the difference between the two styles?
Successive let "statements" in the interactive interpreter are really the equivalent of nested let expressions. They behave as if there is an implied in following the assignment, and the rest of the interpreter session comprises the body of the let. That is
>>> let a = 1
>>> let a = 1
>>> print a
is the same as
let a = 1 in
let a = 1 in
print a
There is a key difference in Haskell between having two definitions of the same name in the same scope, and having two definitions of the same name in nested scopes. GHCi vs modules in a file isn't really what the underlying concept is about, but those situations do lead you to encounter problems if you're not familiar with it.
A let-expression (and a let-statement in a do block) creates a set of bindings with the same scope, not just a single binding. For example, as an expression:
let a = True
    a = False
in a
Or with braces and semicolons (more convenient to paste into GHCi without turning on multi-line mode):
let { a = True; a = False} in a
This will fail, whether in a module or in GHCi. There cannot be a single variable a that is both True and False, and there can't be two separate variables named a in the same scope (or it would be impossible to know which one was being referred to by the source text a).
The variables in a single binding set are all defined "at once"; the order they're written in is not relevant at all. You can see this because it's possible to define mutually recursive bindings that all refer to each other, and couldn't possibly be defined one at a time in any order:
λ let a = True : b
| b = False : a
| in take 10 a
[True,False,True,False,True,False,True,False,True,False]
it :: [Bool]
Here I've defined an infinite list of alternating True and False, and used it to come up with a finite result.
A Haskell module is a single scope, containing all the definitions in the file. Exactly as in a let-expression with multiple bindings, all the definitions "happen at once" [1]; they're only in a particular order because writing them down in a file inevitably introduces an order. So in a module this:
a = True
a = False
gives you an error, as you've seen.
In a do-block you have let-statements rather than let-expressions. [2] These don't have an in part, since they just scope over the entire rest of the do-block. [3] GHCi commands are very much like entering statements in an IO do-block, so you have the same option there, and that's what you're using in your example.
However your example has two let-bindings, not one. So there are two separate variables named a defined in two separate scopes.
Haskell doesn't care (almost ever) about the written order of different definitions, but it does care about the "nesting order" of nested scopes; the rule is that when you refer to a variable a, you get the inner-most definition of a whose scope contains the reference. [4]
As an aside, hiding an outer-scope name by reusing a name in an inner scope is known as shadowing (we say the inner definition shadows the outer one). It's a useful general programming term to know, since the concept comes up in many languages.
So it's not that the rules about when you can define a name twice are different in GHCi vs a module; it's just that the different context makes different things easier.
If you want to put a bunch of definitions in a module, the easy thing to do is make them all top-level definitions, which all have the same scope (the whole module) and so you get an error if you use the same name twice. You have to work a bit more to nest the definitions.
In GHCi you're entering commands one at a time, and it's more work to use multi-line commands or the braces-and-semicolons style, so the easy thing when you want to enter several definitions is to use several let statements, and so you end up shadowing earlier definitions if you reuse names. [5] You have to try more deliberately to actually enter multiple names in the same scope.
[1] Or, more accurately, the bindings "just are", without any notion of "the time at which they happen" at all.
[2] Or rather: you have let-statements as well as let-expressions, since statements are mostly made up of expressions and a let-expression is always valid as an expression.
[3] You can see this as a general rule that later statements in a do-block are conceptually nested inside all earlier statements, since that's what they mean when you translate them to monadic operations; indeed, let-statements are actually translated to let-expressions with the rest of the do-block inside the in part.
[4] It's not ambiguous like two variables with the same name in the same scope would be, though it is impossible to refer to any further-out definitions.
[5] And note that anything you defined before the shadowing that refers to the name will still behave exactly as it did before, referring to the previous variable. This includes functions that return the value of the variable. It's easiest to understand shadowing as introducing a different variable that happens to have the same name as an earlier one, rather than as actually changing what the earlier variable name refers to.

How to convert a unit datatype to a string in SML

Basically, I want to print something of the unit data type via my own structure and signature, which requires this to happen; its data type is unit and I want to "show" it.
So I need to "print" it.
I tried the unit.toString function, and tried to convert it to a character first, but to no avail.
print(unit.toString(symex))
- undefined variable or constructor unit.toString
The unit type has only one value, (). That is, the value doesn't contain any information.
Creating a unit-to-string function is rather simple:
fun unitToString () = "()"
As there is only one possible value, it can have only one possible representation as a string.
However, since the value doesn't actually contain any information, you most likely don't want to operate on the unit value, but rather some other value.

Get argument names in String Interpolation in Scala 2.10

As of Scala 2.10, the following interpolation is possible:
val name = "someName"
val interpolated = s"Hello world, my name is $name"
Now it is also possible to define custom string interpolators, as you can see in the Scala documentation in the "Advanced usage" section here: http://docs.scala-lang.org/overviews/core/string-interpolation.html#advanced_usage
Now then, my question is... is there a way to obtain the original string, before interpolation, including any interpolated variable names, from inside the implicit class that is defining the new interpolation for strings?
In other words, I want to be able to define an interpolation x, in such a way that when I call
x"My interpolated string has a $name"
I can obtain the string exactly as seen above, without replacing the $name part, inside the interpolation.
Edit: on a quick note, the reason I want to do this is that I want to obtain the original string, replace it with another, internationalized string, and then substitute the variable values. This is the main reason I want to get the original string with no interpolation performed on it.
Thanks in advance.
Since Scala's string interpolation can handle arbitrary expressions within ${}, it has to evaluate the arguments before passing them to the formatting function. Thus, direct access to the variable names is not possible by design. As pointed out by Eugene, it is possible to get the name of a plain variable by using macros. I don't think this is a very scalable solution, though. After all, you'll lose the ability to evaluate arbitrary expressions. What, for instance, will happen in this case:
x"My interpolated string has a ${"Mr. " + name}"
You might be able to extract the variable name by using macros, but it might get complicated for arbitrary expressions. My suggestion would be: if the name of your variable should be meaningful within the string interpolation, make it part of the data structure. For example, you can do the following:
case class NamedValue(variableName: String, value: Any)
val name = NamedValue("name", "Some Name")
x"My interpolated string has a $name"
The objects are passed as Any* to x. Thus, you can now match on NamedValue within x and do specific things depending on the "variable name", which is now part of your data structure. Instead of storing the variable name explicitly, you could also exploit a type hierarchy, for instance:
sealed trait InterpolationType
case class InterpolationTypeName(name: String) extends InterpolationType
case class InterpolationTypeDate(date: String) extends InterpolationType
val name = InterpolationTypeName("Someone")
val date = InterpolationTypeDate("2013-02-13")
x"$name is born on $date"
Again, within x you can match on the InterpolationType subtype and handle things according to the type.
It seems that's not possible. String interpolation looks like a compile-time feature that compiles the example to:
StringContext("My interpolated string has a ").x(name)
As you can see, the $name part is already gone. It became really clear to me when I looked at the source code of StringContext: https://github.com/scala/scala/blob/v2.10.0/src/library/scala/StringContext.scala#L1
If you define x as a macro, then you will be able to see the tree of the desugaring produced by the compiler (as shown by @EECOLOR). In that tree, the "name" argument will be seen as Ident(newTermName("name")), so you'll be able to extract a name from there. Be sure to take a look at the macro and reflection guides at docs.scala-lang.org to learn how to write macros and work with trees.
