I was trying to do 20 + 10 in such a way as:
(10&+~) 20
Then I realized that the conjunction & has "short right scope". So it should be
(10&(+~)) 20
which gives me the correct answer: 30. But just out of curiosity
(10&+~) 20
gives 220. Why?
More strangely,
(10&+~) 0.1
gives "domain error'
(10&+~) 20 is 10 (&+~) 20. This seems like a fork or a hook but it isn't, because ~ and & are special snowflakes. ~ has to be dealt with first, so your expression is 10 (&+)~ 20. Now, &+ cannot stand on its own, so ~ has to be reflexive here. Your expression now is
20 (10 (&+)) 20
which now leads to the special dyadic case of bond-& that becomes a power (^:): x m&v y ↔ m&v^:x y. So, finally, the expression becomes:
(10&+)^:20 ] 20
220
In other words, 10 gets added to 20 twenty times: 20 + 20×10 = 220. Obviously, you can't use power with a non-integer count, so (10&+~) 0.1 is a domain error.
It can in math, but I came across this when I was looking into the Rabin-Karp string searching algorithm. The hash function they used (source: Wikipedia: https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm#Hash_function_used) was this:
[(104 × 256 ) % 101 + 105] % 101 = 65
How is this better than deleting the inner mod operator so you only have one on the outside? As so:
[104 × 256 + 105] % 101
As far as I can tell it should give the same result, and mods are generally expensive operations, so wouldn't it be better to have one?
The only thing I can think of is concerns about overflow, but if that were the case, the multiplication would be similarly split up, like so:
[(104 % 101 × 256 % 101) % 101 + 105] % 101 = 65
When you implement a formula, in general you try to keep the code's structure matching the formula's. Let's suppose that the formula looks like this:
[(x × y ) % z + t] % w
In our case z and w have the very same value, but they could be different. If you simplify the formula to match your case, then later, if differences between z and w start to creep in, you will have trouble working out what the code was meant to do. If, on the other hand, z and w are genuinely entangled and guaranteed to stay that way, then you might consider the simplification. Even then you need to be careful: if x and y are fairly large, or if t is very large, the unreduced intermediate values may overflow.
As for your question,
[a % b + c] % b
is equivalent to
[a + c] % b
mathematically. But in the actual code there might be nuances that justify the seemingly superfluous inner mod.
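A quick way to convince yourself of both points, the equivalence and the overflow motivation, is a small numeric check (Python here just for illustration; the fixed-width concern applies to languages with bounded integer types, not to Python itself):

a, b, c = 104 * 256, 101, 105

print((a % b + c) % b)  # 65
print((a + c) % b)      # 65 -- identical, as claimed above

# The nested mod keeps every intermediate value at most b - 1 + c,
# which is what matters when the surrounding code uses fixed-width
# integers that could overflow on the unreduced product or sum.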
I have a homework assignment in which I have to write a program that outputs the change to be given by a vending machine using the lowest number of coins. E.g. £3.67 can be dispensed as 1x£2 + 1x£1 + 1x50p + 1x10p + 1x5p + 1x2p.
However, I'm not getting the right answers and suspect that this might be due to a rounding problem.
change=float(input("Input change"))
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
while change!=0:
    if change-2>=0:
        change=change-2
        twocount+=1
    else:
        if change-1>=0:
            change=change-1
            onecount+=1
        else:
            if change-0.5>=0:
                change=change-0.5
                halfcount+=1
            else:
                if change-0.2>=0:
                    change=change-0.2
                    pttwocount+=1
                else:
                    if change-0.1>=0:
                        change=change-0.1
                        ptonecount+=1
                    else:
                        break
print(twocount,onecount,halfcount,pttwocount,ptonecount)
RESULTS:
Input: 2.3
Output: 10010
i.e. 2.2
Input: 3.4
Output: 11011
i.e. 3.3
Some actually work:
Input: 3.2
Output: 11010
i.e. 3.2
Input: 1.1
Output: 01001
i.e. 1.1
Floating point accuracy
Your approach is correct, but as you guessed, the rounding errors are causing trouble. This can be debugged by simply printing the change variable and information about which branch your code took on each iteration of the loop:
initial value: 3.4
taking a 2... new value: 1.4
taking a 1... new value: 0.3999999999999999 <-- uh oh
taking a 0.2... new value: 0.1999999999999999
taking a 0.1... new value: 0.0999999999999999
1 1 0 1 1
If you wish to keep floats for output and input, multiply by 100 on the way in (cast to integer with int(round(change))) and divide by 100 on the way out of your function, allowing you to operate on integers.
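For example, a minimal sketch of that conversion (reusing the change variable from the question):

change = float(input("Input change"))
# work in whole pence so every subtraction is exact integer arithmetic
pence = int(round(change * 100))   # e.g. 3.4 -> 340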
Additionally, without the 5p, 2p and 1p values, you'll be restricted in the precision you can handle, so don't forget to add those. With all of the values multiplied by 100, the trace becomes:
initial value: 340
taking a 200... new value: 140
taking a 100... new value: 40
taking a 20... new value: 20
taking a 20... new value: 0
1 1 0 2 0
Avoid deeply nested conditionals
Beyond the decimal issue, the nested conditionals make your logic very difficult to reason about. This is a common code smell; the more you can eliminate branching, the better. If you find yourself going beyond about 3 levels deep, stop and think about how to simplify.
Additionally, with a lot of branching and hand-typed code, it's very likely that a subtle bug or typo will go unnoticed or that a denomination will be left out.
Use data structures
Consider using dictionaries and lists in place of blocks like:
twocount=0
onecount=0
halfcount=0
pttwocount=0
ptonecount=0
which can be elegantly and extensibly represented as:
denominations = [200, 100, 50, 20, 10, 5, 2, 1]
used = {x: 0 for x in denominations}
In terms of efficiency, you can use math to handle the amount for each denomination in one fell swoop. Divide the remaining amount by each available denomination in descending order to determine how many of each coin will be chosen and subtract accordingly. We can now write a simple loop over the denominations and eliminate branching completely:
for val in denominations:
    used[val] += amount // val
    amount -= val * used[val]
and print or show a final result of used like:
278 => {200: 1, 100: 0, 50: 1, 20: 1, 10: 0, 5: 1, 2: 1, 1: 1}
The end result of this is that we've reduced 27 lines down to 5 while improving efficiency, maintainability and dynamism.
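Put together, a runnable sketch of the whole thing (names as above, with the input/output conversion described earlier) might look like:

denominations = [200, 100, 50, 20, 10, 5, 2, 1]
used = {x: 0 for x in denominations}

# pounds in, pence internally
amount = int(round(float(input("Input change")) * 100))

for val in denominations:
    used[val] += amount // val
    amount -= val * used[val]

print(used)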
By the way, if the denominations were a different currency, it's not guaranteed that this greedy approach will work. For example, if our available denominations are 25, 20 and 1 cents and we want to make change for 63 cents, the optimal solution is 6 coins (3x 20 and 3x 1). But the greedy algorithm produces 15 (2x 25 and 13x 1). Once you're comfortable with the greedy approach, research and try solving the problem using a non-greedy approach.
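As a starting point for that, here is one possible non-greedy sketch using dynamic programming (my own illustration of the idea, not the only way to do it):

def min_coins(amount, denominations):
    """Fewest coins summing to `amount`, considering every denomination
    at every sub-amount (classic dynamic-programming formulation)."""
    INF = float("inf")
    best = [0] + [INF] * amount        # best[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for d in denominations:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return best[amount]

print(min_coins(63, [25, 20, 1]))  # 6, where the greedy approach uses 15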
This question occurred to me while solving this problem.
NB. Find the next number whose prime factorization exponents
NB. match those of the given number.
exps=. /:~@{:@(__&q:)
f=. 3 : 0
target=. exps y
(>:^:(-.@(target-:exps))^:_) y+1
)
f 20 NB. 28
Note that in order to specify the while condition of the Do... While, I first had to calculate the prime exponents of the argument y and save that answer to target. I was then able to write -.@(target-:exps) as the While condition.
This of course breaks the tacit style. So I'd like to know if there is a way to achieve the same thing that my verb above achieves, but do so as a single tacit verb?
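For reference, here is the loop I'm describing spelled out imperatively (a rough Python equivalent of what f does; the helper names are just for illustration):

def exps(n):
    # sorted list of prime-factorization exponents of n
    out, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            out.append(e)
        d += 1
    if n > 1:
        out.append(1)
    return sorted(out)

def f(y):
    # next number above y with the same exponent signature
    target = exps(y)
    n = y + 1
    while exps(n) != target:
        n += 1
    return n

print(f(20))  # 28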
The way I approached this was to think of f as the centre of a dyadic fork, where the left argument is exps y, the unchanging comparison target, and the right argument is >: y, which does the initial incrementing. The next step was to use ] at each ^: in f to keep exps monadic. The [ pulls in exps y from the left argument.
Written in tacit form:
exps=. /:~@{:@(__&q:)
ft=: exps >:@]^:([ -.@-: exps@])^:_ >:
ft 20
28
It's cool that 3 * 4 results in 12, and * 4 results in 1, but does using the same primitive for both operations ever provide a benefit? For example, let's say I were to define the following:
SIGNUM =: * : [:
TIMES =: [: : *
If I were to only ever use SIGNUM and TIMES instead of *, would I ever miss out on a clever use of *? That is, x TIMES y seems to be exactly the same as x * y for every x I can imagine (although my imagination is pretty limited in this regard). Is there an x where x * y produces the same result as SIGNUM y?
In case * : [: isn't immediately clear, the following should illustrate:
SIGNUM =: * : [:
TIMES =: [: : *
SIGNUM 4
1
3 TIMES 4
12
* 4
1
3 * 4
12
3 SIGNUM 4
|domain error: SIGNUM
| 3 SIGNUM 4
TIMES 4
|domain error: TIMES
| TIMES 4
Let's write conclusions from the comments down:
- There is no direct language-level reason not to use names for primitives.
- Using names instead of primitives can however harm performance, as special code does not necessarily get triggered. I think this can be remedied by fixing verbs after building them with f..
- The reason for having the same name for monadic and dyadic verbs is historical: APL used it before. Most verbs have related actions in their monadic / dyadic versions and inflections (a number of trailing dots and colons).
For instance, ^ can be expressed in traditional notation as pow(x,y) or exp(y), where x and y are the left and right arguments and e is Euler's number. Here, the monadic version is the same as the dyadic version with a sensible default left argument. Different inflections of the same root are all power-related verbs:
- ^. does logarithms (base e for the monad)
- ^: does Power conjunction, applying a verb a variable number of times.
Other relations between monadic and dyadic verbs can also exist, for example $ can be said to get or set the Shape of an array, depending on whether it is used as monad or dyad.
That said, I think that once one gets a bit of experience with J, it becomes easier to spot which valence a verb has based on the sentence it is used in. Examples are:
Monad @ Ambiv NB. Mv is always used monadically, Av depends on arguments
Ambiv & Monad
(Dyad Monad) NB. A hook, where verb 1 is always dyadic
(Ambiv Dyad Ambiv) NB. A fork, the middle one is always dyadic
It was probably a mistake to use the same symbols for dyadic and monadic built-ins except for those where the monadic case is a default parameter to the dyad.
TIMES =: 1&$: : *
would be a good definition that doesn't give an error.
As for ambivalent cases,
(3 * TIMES) 4
12
2 (3 * TIMES) 4
24
Another useful ambivalent verb is:
TIMESORSQUARE =: *~
*~ 3
9
2 *~ 3
6
I am looking for a library that provides a 'value with error' (e.g. x ± y). But searching for "Haskell xyz Error" only gives error handling libraries.
I would expect that such a library would provide common math operations (Num, Floating) where appropriate. The use case would be to get an error estimate from a calculation based on noisy sensor readings.
Update
I did some research and the term "propagation of uncertainty" came up. I found uncertainly-haskell which I'll try out soon. Are there other packages like this?
Have a look at the intervals package.
The Data.Eq.Approximate module seems to be a fit for getting approximate equality.
The purpose of this module is to provide a newtype wrapper that allows one to effectively override the equality operator of a value so that it is approximate rather than exact. For example, the type
type ApproximateDouble = AbsolutelyApproximateValue (Digits Five) Double
defines an alias for a wrapper containing Doubles such that two doubles are equal if they are equal to within five decimals of accuracy; for example, we have that
1 == (1+10^^(-6) :: ApproximateDouble)
evaluates to True. Note that we did not need to wrap the value 1+10^^(-6) since AbsolutelyApproximateValue is an instance of Num. For convenience, Num as well as many other of the numerical classes such as Real and Floating have all been derived for the wrappers defined in this package so that one can conveniently use the wrapped values in the same way as one would use the values themselves.
Two kinds of wrappers are provided by this package.
The uncertain package seems to provide what you are looking for:
Some highlights from the readme:
Provides tools to manipulate numbers with inherent
experimental/measurement uncertainty, and propagates them through
functions based on principles from statistics.
Manipulate with error propagation
ghci> let x = 1.52 +/- 0.07
ghci> let y = 781.4 +/- 0.3
ghci> let z = 1.53e-1 `withPrecision` 3
ghci> cosh x
2.4 +/- 0.2
ghci> exp x / z * sin (y ** z)
10.9 +/- 0.9
ghci> pi + 3 * logBase x y
52 +/- 5
Create numbers
ghci> 1.52 +/- 0.07
1.52 +/- 7.0e-2
ghci> fromSamples [12.5, 12.7, 12.6, 12.6, 12.5]
12.58 +/- 7.0e-2
Comparisons
Note that this is very different from other libraries with similar
data types (like from intervals and rounding); these do not
attempt to maintain intervals or simply digit precisions; they instead
are intended to model actual experimental and measurement data with
their uncertainties, and apply functions to the data with the
uncertainties and properly propagating the errors with sound
statistical principles.
For a clear example, take
> (52 +/- 6) + (39 +/- 4)
91.0 +/- 7.0
In a library like intervals, this would result in 91 +/- 10
(that is, a lower bound of 46 + 35 and an upper bound of 58 + 43).
However, with experimental data, errors in two independent samples
tend to "cancel out", and result in an overall aggregate uncertainty
in the sum of approximately 7.