I was solving a question on CodeChef. A specific line of input looks like this:
10 232 4543
I want to store these values in variables and then perform the calculation. The following is the line of code I am using to achieve this:
val (d,l,r) = readLine()!!.split(" ").map{ it -> it.toInt()}
This line worked for a previous question but is not working for the current one. Here are my code and the link to the question.
fun main() {
    var t = readLine()!!.toInt()
    for (i in 0 until t) {
        val (d, l, r) = readLine()!!.split(" ").map { it -> it.toInt() }
        if (d < l) {
            println("Too Early")
        }
        else if (d > r) {
            println("Too Late")
        }
        else {
            println("Take second dose now")
        }
    }
}
This is the link to the question: https://www.codechef.com/LP1TO201/problems/VDATES
The following is the error I am receiving.
Exception in thread "main" java.lang.NumberFormatException: For input string: ""
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
at java.base/java.lang.Integer.parseInt(Integer.java:678)
at java.base/java.lang.Integer.parseInt(Integer.java:786)
at ProgKt.main(prog.kt:4)
at ProgKt.main(prog.kt)
An empty string is not a number. split(" ") produces an empty string whenever the line contains consecutive spaces or leading/trailing whitespace, and toInt() then throws. There are two easy ways to solve this:
Filter empty strings using isEmpty():
val (d,l,r) = readLine()!!.split(" ").filterNot { it.isEmpty() }.map { it -> it.toInt() }
Use toIntOrNull() and only take non-null elements:
val (d,l,r) = readLine()!!.split(" ").mapNotNull { it -> it.toIntOrNull() }
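To see where the empty string comes from, here is a minimal, self-contained Kotlin sketch (the doubled space in the sample line is my assumption about what the failing input looks like); it prints the raw tokens produced by split(" ") and then both fixes applied to them:

fun main() {
    val line = "10  232 4543"  // note the doubled space (assumed input)

    // split(" ") keeps an empty token between the two consecutive spaces
    println(line.split(" "))  // [10, , 232, 4543]

    // fix 1: drop empty tokens before converting
    println(line.split(" ").filterNot { it.isEmpty() }.map { it.toInt() })  // [10, 232, 4543]

    // fix 2: keep only tokens that parse as Int
    println(line.split(" ").mapNotNull { it.toIntOrNull() })  // [10, 232, 4543]
}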
When I run the following Groovy snippet, it prints ",a,b,c" as expected:
@CompileStatic
public static void main(String[] args) {
    def inList = ["a", "b", "c"]
    def outList = inList.inject("", { a, b -> a + "," + b })
    println(outList)
}
Now I change the first parameter in inject from an empty string to the number 0:
@CompileStatic
public static void main(String[] args) {
    def inList = ["a", "b", "c"]
    def outList = inList.inject(0, { a, b -> a + "," + b })
    println(outList)
}
This won't work and produces an exception: "Cannot cast object '0,a' with class 'java.lang.String' to class 'java.lang.Number'". The problem is that the compiler did not complain. I tried this in Scala and Kotlin (where inject is called fold), and the respective compilers complain about the mismatch as expected (a Kotlin sketch follows after the Java snippet). Also, the counterpart in Java 8 does not compile (it says: found int, required: java.lang.String):
List<String> list = Arrays.asList("a", "b", "c");
Object obj = list.stream().reduce(0, (x, y) -> x + y);
System.out.println(obj);
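For reference, a minimal Kotlin sketch of the fold I mean; the Int seed is rejected at compile time, which is exactly the check Groovy's static compiler is skipping here:

fun main() {
    val inList = listOf("a", "b", "c")
    // does not compile: the accumulator is an Int, and there is no '+' taking a String
    // val bad = inList.fold(0) { acc, s -> acc + "," + s }
    val ok = inList.fold("") { acc, s -> acc + "," + s }
    println(ok)  // prints ,a,b,c
}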
The question now is whether this can be fixed in Groovy, or whether this is a general problem caused by static typing being introduced into the language only later.
I think (I'm not 100% sure) that it is a bug in Groovy, very probably somewhere in type inference, and it can be fixed. Try filing an issue in the bug tracker.
If you want to see a compilation error, you can give types to the closure parameters:
inList.inject(0, { String a, String b -> a + "," + b })
Gives an error:
Expected parameter of type java.lang.Integer but got java.lang.String
# line 7, column 38.
def outList = inList.inject(0, { String a, String b -> a + "," + b})
^
The following example shows two checks that appear equivalent, yet the second finds a counterexample while the first does not. With 'Prevent Overflow' set to 'No', both return the same result:
sig A {
    x : Int,
    y : Int
}

pred f1[a : A] {
    y = x ++ (a -> ((a.x).add[1]))
}

pred f2[a : A] {
    a.y = (a.x).add[1]
    y = x ++ (a -> a.y)
}

check {
    all a : A | {
        f1[a] => f2[a]
        f2[a] => f1[a]
    }
}

check {
    all a : A | {
        f1[a] <=> f2[a]
    }
}
I am using Alloy 4.2_2015-02-22 on Ubuntu Linux with sat4j.
I am sorry not to be able to offer a more useful answer than the following. With luck, someone else will do better.
It is possible, I think, that your model exhibits a bug in Alloy. But perhaps instead the model exhibits a counter-intuitive consequence of the semantics of integers in Alloy when the Prevent Overflow flag is set. Those semantics are described in the paper Preventing arithmetic overflows in Alloy by Aleksandar Milicevic and Daniel Jackson. You may have an easier time following the details than I am having.
The following expressions seem to suggest (a) that the unintuitive results have to do with the fact that the law of excluded middle does not hold when negation is applied to undefined values, and (b) that in the case where a.x = 7 (or whatever the maximum value of Int is, in a particular scope), the predicates f1 and f2 behave differently:
pred xm1a[ a : A] { (f1[a] or not(f1[a])) }
pred xm2a[ a : A] { (f2[a] or not(f2[a])) }
pred xm1b[ a : A] { (not (f1[a] and not(f1[a]))) }
pred xm2b[ a : A] { (not (f2[a] and not(f2[a]))) }
// some sensible assertions: no counterexamples
check x1 { all a : A | a.x = 7 implies xm1a[a] }
check x2 { all a : A | a.x = 7 implies xm2a[a] }
check x3 { all a : A | a.x = 7 implies xm1b[a] }
check x4 { all a : A | a.x = 7 implies xm2b[a] }
// some odd assertions: counterexamples for y2 and y4
check y1 { all a : A | a.x = 7 implies not xm1a[a] }
check y2 { all a : A | a.x = 7 implies not xm2a[a] }
check y3 { all a : A | a.x = 7 implies not xm1b[a] }
check y4 { all a : A | a.x = 7 implies not xm2b[a] }
It's not clear (to me) whether the relevant difference between f1 and f2 is that f2 refers explicitly to a.y, or that f2 has two clauses with an implicit and between them.
If the arithmetic relation between a.x and a.y is important to the model, then figuring out exactly how overflow cases are handled will be essential. If all that matters is that a.x != a.y and y = x ++ (a -> a.y), then weakening the condition will have the nice side effect that the reader need not understand Alloy's overflow semantics. (I expect you realize that already; I mention it for the benefit of later readers.)
This may be a duplicate, but "as" is an INCREDIBLY hard keyword to google; even S.O. ignores "as" as part of the query.
So I'm wondering how to implement a class that supports "as" in both directions. For example, given this class:
class X {
    private val

    public X(def v) {
        val = v
    }

    public asType(Class c) {
        if (c == Integer.class)
            return val as Integer
        if (c == String.class)
            return val as String
    }
}
This allows something like:
new X(3) as String
to work, but doesn't help with:
3 as X
I probably have to attach/modify the "asType" on String and Integer somehow, but I feel any changes like this should be confined to the "X" class... Can the X class either implement a method like:
X fromObject(object)
or somehow modify the String/Integer class from within X. This seems tough since it won't execute any code in X until X is actually used... What if my first usage of X is "3 as X"; will X get a chance to override Integer's asType before Groovy tries to call it?
As you say, it's not going to be easy to change the asType method for Integer to accept X as a new type of transformation (especially without destroying the existing functionality).
The best I can think of is to do:
Integer.metaClass.toX = { -> new X( delegate ) }
And then you can call:
3.toX()
I can't think how 3 as X could be done -- as you say, the other way round, new X('3') as Integer, is relatively easy.
Actually, you can do this:
// Get a handle on the old `asType` method for Integer
def oldAsType = Integer.metaClass.getMetaMethod( "asType", [Class] as Class[] )
// Then write our own
Integer.metaClass.asType = { Class c ->
    if( c == X ) {
        new X( delegate )
    }
    else {
        // if it's not an X, call the original
        oldAsType.invoke( delegate, c )
    }
}
3 as X
This keeps the functionality out of the Integer type, and minimizes scope of the effect (which is good or bad depending on what you're looking for).
This category will apply asType from the Integer side.
class IntegerCategory {
    static Object asType(Integer inty, Class c) {
        if (c == X) return new X(inty)
        else return inty.asType(c)
    }
}

use(IntegerCategory) {
    (3 as X) instanceof X
}
Does anyone know of a truly declarative language? The behavior I'm looking for is kind of what Excel does, where I can define variables and formulas and have a formula's result change when the input changes (without having to set the answer again myself).
The behavior I'm looking for is best shown with this pseudo code:
X = 10 // define and assign two variables
Y = 20;
Z = X + Y // declare a formula that uses these two variables
X = 50 // change one of the input variables
?Z // asking for Z should now give 70 (50 + 20)
I've tried this in a lot of languages like F#, Python, Matlab, etc., but every time I tried it they come up with 30 instead of 70, which is correct from an imperative point of view, but I'm looking for a more declarative behavior, if you know what I mean.
And this is just a very simple calculation. When things get more difficult it should handle stuff like recursion and memoization automagically.
The code below would obviously work in C#, but it's just so much code for the job; I'm looking for something a bit more to the point, without all that 'technical noise':
class BlaBla {
    public int X { get; set; } // this used to be even worse before 3.0
    public int Y { get; set; }
    public int Z { get { return X + Y; } }
}

static void Main() {
    BlaBla bla = new BlaBla();
    bla.X = 10;
    bla.Y = 20;
    // can't define anything here
    bla.X = 50; // bit pointless here but I'll do it anyway.
    Console.WriteLine(bla.Z); // 70, hurray!
}
This just seems like so much code, curly braces and semicolons that add nothing.
Is there a language/application (apart from Excel) that does this? Maybe I'm not doing it right in the mentioned languages, or I've completely missed an app that does just this.
I prototyped a language/ application that does this (along with some other stuff) and am thinking of productizing it. I just can't believe it's not there yet. Don't want to waste my time.
Any Constraint Programming system will do that for you.
Examples of CP systems that have an associated language are ECLiPSe, SICStus Prolog / CP package, Comet, MiniZinc, ...
It looks like you just want to make Z store a function instead of a value. In C#:
var X = 10; // define and assign two variables
var Y = 20;
Func<int> Z = () => X + Y; // declare a formula that uses these two variables
Console.WriteLine(Z());
X = 50; // change one of the input variables
Console.WriteLine(Z());
So the equivalent of your ?-prefix syntax is a ()-suffix, but otherwise it's identical. A lambda is a "formula" in your terminology.
Behind the scenes, the C# compiler builds almost exactly what you presented in your C# conceptual example: it makes X into a field in a compiler-generated class, and allocates an instance of that class when the code block is entered. So congratulations, you have re-discovered lambdas! :)
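For comparison, here is a quick sketch of the same idea in Kotlin (my own addition, not part of the original answer): the lambda captures the mutable variable, so each call re-reads its current value:

fun main() {
    var x = 10
    val y = 20
    val z = { x + y }  // the "formula": a lambda capturing x and y
    println(z())       // 30
    x = 50
    println(z())       // 70
}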
In Mathematica, you can do this:
x = 10;  (* assign 10 to the variable x *)
y = 20;  (* assign 20 to the variable y *)
z := x + y;  (* assign the expression x+y to the variable z *)
Print[z];
(* prints 30 *)
x = 50;
Print[z];
(* prints 70 *)
The operator := (SetDelayed) is different from = (Set). The former binds an unevaluated expression to a variable, the latter binds an evaluated expression.
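A rough Kotlin analogue of that distinction (my sketch, not from this answer): a plain val is evaluated once, like Set, while a val with a custom getter is recomputed on every read, like SetDelayed:

object Model {
    var x = 10
    var y = 20
    val zSet = x + y              // like  = : evaluated once, when Model is first initialized
    val zDelayed get() = x + y    // like := : recomputed from x and y on every read
}

fun main() {
    Model.x = 50                  // first touch of Model runs its initializers with x = 10, y = 20
    println(Model.zSet)           // 30: the stored value does not change
    println(Model.zDelayed)       // 70: recomputed from the current x and y
}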
Wanting to have two definitions of X is inherently imperative. In a truly declarative language you have a single definition of a variable in a single scope. The behavior you want from Excel corresponds to editing the program.
Have you seen Resolver One? It's like Excel with a real programming language behind it.
Here is Daniel's example in Python, since I noticed you said you tried it in Python.
x = 10
y = 10
z = lambda: x + y
# Output: 20
print(z())
x = 20
# Output: 30
print(z())
Two things you can look at are the cells lisp library, and the Modelica dynamic modelling language, both of which have relation/equation capabilities.
There is a Lisp library with this sort of behaviour:
http://common-lisp.net/project/cells/
JavaFX will do that for you if you use bind instead of = for Z
react is an OCaml FRP library. Contrary to naive emulations with closures, it will recalculate values only when needed:
Objective Caml version 3.11.2
# #use "topfind";;
# #require "react";;
# open React;;
# let (x,setx) = S.create 10;;
val x : int React.signal = <abstr>
val setx : int -> unit = <fun>
# let (y,sety) = S.create 20;;
val y : int React.signal = <abstr>
val sety : int -> unit = <fun>
# let z = S.Int.(+) x y;;
val z : int React.signal = <abstr>
# S.value z;;
- : int = 30
# setx 50;;
- : unit = ()
# S.value z;;
- : int = 70
You can do this in Tcl, somewhat. In Tcl you can set a trace on a variable such that whenever it is accessed, a procedure can be invoked. That procedure can recalculate the value on the fly.
Following is a working example that does more or less what you ask:
proc main {} {
    set x 10
    set y 20
    define z {$x + $y}
    puts "z (x=$x): $z"
    set x 50
    puts "z (x=$x): $z"
}

proc define {name formula} {
    global cache
    set cache($name) $formula
    uplevel trace add variable $name read compute
}

proc compute {name _ op} {
    global cache
    upvar $name var
    if {[info exists cache($name)]} {
        set expr $cache($name)
    } else {
        set expr $var
    }
    set var [uplevel expr $expr]
}

main
Groovy and the magic of closures.
def (x, y) = [ 10, 20 ]
def z = { x + y }
assert 30 == z()
x = 50
assert 70 == z()
def f = { n -> n + 1 } // define another closure
def g = { x + f(x) } // ref that closure in another
assert 101 == g() // x=50, x + (x + 1)
f = { n -> n + 5 } // redefine f()
assert 105 == g() // x=50, x + (x + 5)
It's possible to add automagic memoization to functions too but it's a lot more complex than just one or two lines. http://blog.dinkla.net/?p=10
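For a sense of what a hand-rolled version involves, here is a small Kotlin sketch (my own, not from the linked post) of a memoizing wrapper for a one-argument function:

fun <T, R> memoize(f: (T) -> R): (T) -> R {
    val cache = HashMap<T, R>()
    return { t -> cache.getOrPut(t) { f(t) } }  // compute once per argument, then reuse
}

fun main() {
    val slowSquare = { n: Int -> println("computing $n"); n * n }
    val fastSquare = memoize(slowSquare)
    println(fastSquare(4))  // prints "computing 4", then 16
    println(fastSquare(4))  // cached: prints 16 only
}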
In F#, a little verbosely:
let x = ref 10
let y = ref 20
let z () = !x + !y
z();;
y := 40
z();;
You can mimic it in Ruby:
x = 10
y = 20
z = lambda { x + y }
z.call # => 30
x = 50
z.call # => 70
Not quite the same as what you want, but pretty close.
Not sure how well MetaPost would work for your application, but it is declarative.
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
x = 10
y = 20
z = function() return x + y; end
x = 50
= z()
70
It's not what you're looking for, but Hardware Description Languages are, by definition, "declarative".
This F# code should do the trick. You can use lazy evaluation (System.Lazy object) to ensure your expression will be evaluated when actually needed, not sooner.
let mutable x = 10
let y = 20
let z = lazy (x + y)
x <- 30
printf "%d" z.Value