Scala count chars in a string logical error

here is the code:
val a = "abcabca"
a.groupBy((c: Char) => a.count( (d:Char) => d == c))
here is the result I want:
scala.collection.immutable.Map[Int,String] = Map(2 -> b, 2 -> c, 3 -> a)
but the result I get is
scala.collection.immutable.Map[Int,String] = Map(2 -> bcbc, 3 -> aaa)
why?
thank you.

Write an expression like
"abcabca".groupBy(identity).collect{
case (k,v) => (k,v.length)
}
which will give output as
res0: scala.collection.immutable.Map[Char,Int] = Map(b -> 2, a -> 3, c -> 2)

Let's dissect your initial attempt:
a.groupBy((c: Char) => a.count( (d:Char) => d == c))
So, what are you grouping by? The result of a.count(...), so the key of your Map will be an Int. For the char a we get 3; for the chars b and c we get 2.
Now, the original String is traversed and the results are accumulated, char by char.
So after traversing the first "ab", the current state is "3 -> a, 2 -> b". (Note that .count() is called for each char in the string, which makes this a wasteful O(n²) algorithm, but anyway.)
The string is progressively traversed, and at the end the accumulated result is shown. As it turns out, the three "a"s have ended up under the key 3, and the "b"s and "c"s under the key 2, in the order the string was traversed, which is left to right.
Now, the usual groupBy on a list returns something like Map[K, List[T]], so you may have expected a List[Char] somewhere. That doesn't happen here (because the Repr for a String is String), so your group of chars is effectively recombined into a String and handed to you as such.
Hence your final result!
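A quick REPL check illustrates both points, the Int keys and the String (rather than List[Char]) values (outputs from a 2.x REPL; entry order may differ):
scala> "abcabca".groupBy(c => "abcabca".count(_ == c))
res0: scala.collection.immutable.Map[Int,String] = Map(2 -> bcbc, 3 -> aaa)
scala> "abcabca".toList.groupBy(identity)
res1: scala.collection.immutable.Map[Char,List[Char]] = Map(b -> List(b, b), a -> List(a, a, a), c -> List(c, c))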

Your question title reads "Scala count chars in a string logical error", but you are using a Map with the counts as keys. Duplicate keys are not allowed in a Map, so entries with equal keys collapse and only one of them survives. What you may actually want is a Seq of (count, char) tuples, i.e. a List[(Int, Char)]. Try this.
val x = "abcabca"
x.groupBy(identity).mapValues(_.size).toList.map{case (x,y)=>(y,x)}
In the Scala REPL:
scala> x.groupBy(identity).mapValues(_.size).toList.map{case (x,y)=>(y,x)}
res13: List[(Int, Char)] = List((2,b), (3,a), (2,c))
The above gives the counts and their respective chars as a list of tuples. So this may be what you really wanted.
If you try converting this to a Map:
scala> x.groupBy(identity).mapValues(_.size).toList.map{case (x,y)=>(y,x)}.toMap
res14: scala.collection.immutable.Map[Int,Char] = Map(2 -> c, 3 -> a)
So this is obviously not what you want.
Even more concisely use:
x.distinct.map(v=>(x.filter(_==v).size,v))
scala> x.distinct.map(v=>(x.filter(_==v).size,v))
res19: scala.collection.immutable.IndexedSeq[(Int, Char)] = Vector((3,a), (2,b), (2,c))

The problem with your approach is that you are mapping counts to characters. That is, in the case of
val str = "abcabca"
while traversing the string str, a has count 3, b has count 2 and c has count 2. When the map is created (via groupBy), all the characters that share the same key end up together in that key's value, that is:
Map(3 -> aaa, 2 -> bcbc)
That's the reason you are getting such output from your program.
As you can see in the definition of the groupBy function:
def groupBy[K](f: (A) ⇒ K): immutable.Map[K, Repr]
Partitions this traversable collection into a map of traversable collections according to some discriminator function.
Note: this method is not re-implemented by views. This means when applied to a view it will always force the view and return a new traversable collection.
K: the type of keys returned by the discriminator function.
f: the discriminator function.
Returns: a map from keys to traversable collections such that the following invariant holds:
(xs groupBy f)(k) = xs filter (x => f(x) == k)
That is, every key k is bound to a traversable collection of those elements x for which f(x) equals k.
groupBy returns a Map that satisfies the following invariant:
(xs groupBy f)(k) = xs filter (x => f(x) == k)
which means that, for each key, it returns the collection of elements mapped to that key by f.
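You can check the invariant directly against the original attempt (xs and f are just local names used for illustration):
val xs = "abcabca"
def f(c: Char) = xs.count(_ == c)
(xs groupBy f)(2)              // "bcbc"
xs filter (x => f(x) == 2)     // "bcbc", the same, as the invariant promises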

Related

How to find all pairs of two sets and two elements in a collection using MapReduce in Spark?

I have a collection of sets; each set contains many items. I want to retrieve all pairs of sets and elements using Spark, where each pair after reduce processing will contain two items and two sets
for example:
If I have this list of sets
Set A = {1, 2, 3, 4}
Set B = {1, 2, 4, 5}
Set C = {2, 3, 5, 6}
The map process will be:
(A,1)
(A,2)
(A,3)
(A,4)
(B,1)
(B,2)
(B,4)
(B,5)
(C,2)
(C,3)
(C,5)
(C,6)
The target result after reduce is:
(A B, 1 2) // since 1 2 exist in both A and B
(A B, 1 4)
(A B, 2 4)
(A C,2 3)
(B C,2 5)
here (A B, 1 3) is not in the result because 3 does not exist in B
Could you help me solve this problem in Spark with one map and one reduce function, in any language (Python, Scala, or Java)?
Let's break this problem into multiple parts. I consider the transformation from the input lists to the map output trivial, so let us start from there:
you have an RDD of (String, Int) pairs looking like
("A", 1)
("A", 2)
....
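That first transformation is treated as given above; for completeness, here is a minimal sketch of producing such an RDD (the local sets map and the SparkContext sc are assumptions used only for illustration):
val sets = Map("A" -> Set(1, 2, 3, 4), "B" -> Set(1, 2, 4, 5), "C" -> Set(2, 3, 5, 6))
val mappedOutput = sc.parallelize(sets.toSeq.flatMap { case (name, elems) => elems.toSeq.map(e => (name, e)) })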
Let's forget for now that you need 2 integer elements in the result, and first solve for the intersection set between any 2 keys from the mapped output.
Result from your input would look like
(AB, Set(1,2,4))
(BC, Set(2,5))
(AC, Set(2,3))
To do this, first extract all keys from your mapped output (mappedOutput, which is an RDD of (String, Int)), convert them to a set, and get all combinations of 2 elements (I am using a naive method here; a better way that scales would be to use a combination generator, as sketched below)
val combinations = mappedOutput.map(x => x._1).collect.toSet
.subsets.filter(x => x.size == 2).toList
.map(x => x.mkString(""))
The output would be something like List(AB, AC, BC); these combination codes will serve as the keys to join on.
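For reference, here is a sketch of the more scalable combination-generator variant mentioned above, using Scala's built-in combinations instead of enumerating all subsets (the sorted call is only there to make the codes deterministic):
val combinations = mappedOutput.map(_._1).distinct.collect.toList.sorted
  .combinations(2).map(_.mkString("")).toList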
Next, convert the mapped output into pairs of set key (A, B, C) => set of elements:
val step1 = mappedOutput.groupByKey().map(x => (x._1, x._2.toSet))
Attach the combination codes as keys to step1:
val step2 = step1.map(x => combinations.filter(y => y.contains(x._1)).map(y => (y, x))).flatMap(x => x)
The output would be (AB, (A, set of elements in A)), (AC, (A, set of elements in A)), etc. Because of the filter, we do not attach the combination code BC to set A.
Now obtain the result we want using a reduce:
val result = step2.reduceByKey((a, b) => ("", a._2.intersect(b._2))).map(x => (x._1, x._2._2))
Now we have the intermediate output described at the start. What is left is to transform this into the result you need, which is very simple to do:
val transformed = result.map(x => x._2.subsets.filter(x => x.size == 2).map(y => (x._1, y.mkString(" ")))).flatMap(x => x)
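On the example input, collecting transformed gives the target pairs from the question (the order of the pairs, and of the two elements within each pair, may vary):
transformed.collect.foreach(println)
// (AB,1 2)
// (AB,1 4)
// (AB,2 4)
// (AC,2 3)
// (BC,2 5)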
end :)

count number of chars in String

In SML, how can I count the number of appearances of chars in a String using recursion?
Output should be in the form of (char, #AppearancesOfChar).
What I managed to do is
fun frequency(x) = if x = [] then [] else [(hd x, 1)] @ frequency(tl x)
which will return tuples of the form (char, 1). I can also eliminate duplicates in this list, so what I fail to do now is to write a function like
fun count(s: string, l: (char * int) list)
which 'iterates' through the string, incrementing the particular tuple component. How can I do this recursively? Sorry for the noob question, I am new to functional programming, but I hope the question is at least understandable :)
I'd break the problem into two: Increasing the frequency of a single character, and iterating over the characters in a string and inserting each of them. Increasing the frequency depends on whether you have already seen the character before.
fun increaseFrequency (c, []) = [(c, 1)]
  | increaseFrequency (c, ((c1, count)::freqs)) =
      if c = c1
      then (c1, count+1)::freqs
      else (c1, count)::increaseFrequency (c, freqs)
This provides a function with the following type declaration:
val increaseFrequency = fn : ''a * (''a * int) list -> (''a * int) list
So given a character and a list of frequencies, it returns an updated list of frequencies where either the character has been inserted with frequency 1, or its existing frequency has been increased by 1, by performing a linear search through each tuple until either the right one is found or the end of the list is met. All other character frequencies are preserved.
The simplest way to iterate over the characters in a string is to explode it into a list of characters and insert each character into an accumulating list of frequencies that starts with the empty list:
fun frequencies s =
    let fun freq [] freqs = freqs
          | freq (c::cs) freqs = freq cs (increaseFrequency (c, freqs))
    in freq (explode s) [] end
But this isn't a very efficient way to iterate a string one character at a time. Alternatively, you can visit each character by indexing without converting to a list:
fun foldrs f e s =
    let val len = size s
        fun loop i e' = if i = len
                        then e'
                        else loop (i+1) (f (String.sub (s, i), e'))
    in loop 0 e end
fun frequencies s = foldrs increaseFrequency [] s
You might also consider using a more efficient representation of sets than lists to reduce the linear-time insertions.

Scala String Similarity

I have Scala code that computes the similarity between a set of strings and gives all the unique strings.
val filtered = z.reverse.foldLeft((List.empty[String], z.reverse)) {
  case ((acc, zt), zz) =>
    if (zt.tail.exists(tt => similarity(tt, zz) < threshold)) (acc, zt.tail)
    else (zz :: acc, zt.tail)
}._1
I'll try to explain what is going on here:
This uses a fold over the reversed input data, starting from an empty list (to accumulate results) and the (reverse of the) remaining input data (to compare against - I labeled it zt for "z-tail").
The fold then cycles through the data, checking each entry against the tail of the remaining data (so it doesn't get compared to itself or any earlier entry)
If there is a match, just the existing accumulator (labelled acc) will be allowed through, otherwise, add the current entry (zz) to the accumulator. This updated accumulator is paired with the tail of the "remaining" Strings (zt.tail), to ensure a reducing set to compare against.
Finally, we end up with a pair of lists: the required remaining Strings, and an empty list (no Strings left to compare against), so we take the first of these as our result.
The problem is that if, say, the 1st, 4th and 8th strings are similar, I am getting only the 1st string. Instead, I should get a set of (1st, 4th, 8th); then if the 2nd, 5th, 14th and 21st strings are similar, I should get a set of (2nd, 5th, 14th, 21st).
If I understand you correctly - you want the result to be of type List[List[String]] and not the List[String] you are getting now - where each item is a list of similar Strings (right?).
If so - I can't see a trivial change to your implementation that would achieve this, as the similar values are lost (when the condition holds and you just return acc, you skip an item and you'll never "see" it again).
Two possible solutions I can think of:
Based on your idea, but using a 3-tuple of the form (acc, zt, scanned) as the foldLeft result type, where the added scanned is the list of already-scanned items. This way we can refer back to them when we find an element that doesn't have preceding similar elements:
val filtered = z.reverse.foldLeft((List.empty[List[String]], z.reverse, List.empty[String])) {
  case ((acc, zt, scanned), zz) =>
    val hasSimilarPreceeding = zt.tail.exists { tt => similarity(tt, zz) < threshold }
    val similarFollowing = scanned.collect { case tt if similarity(tt, zz) < threshold => tt }
    (if (hasSimilarPreceeding) acc else (zz :: similarFollowing) :: acc, zt.tail, zz :: scanned)
}._1
A probably-slower but much simpler solution would be to just groupBy the group of similar strings:
val alternative = z.groupBy(s => z.collect {
  case other if similarity(s, other) < threshold => other
}.toSet).values.toList
All of this assumes that the function:
f(a: String, b: String): Boolean = similarity(a, b) < threshold
is symmetric and transitive, i.e.:
f(a, b) && f(a, c) implies f(b, c)
f(a, b) if and only if f(b, a)
To test both implementations I used:
// strings are similar if they start with the same character
def similarity(s1: String, s2: String) = if (s1.head == s2.head) 0 else 100
val threshold = 1
val z = List("aa", "ab", "c", "a", "e", "fa", "fb")
And both options produce the same results:
List(List(aa, ab, a), List(c), List(e), List(fa, fb))

Scala Comprehension Errors

I am working on some of the exercism.io exercises. The current one I am working on is the Scala DNA exercise. Here is my code and the errors that I am receiving:
For reference, DNA is instantiated with a strand String. A DNA instance can call count (which counts the occurrences in the strand of the single nucleotide passed) and nucleotideCounts (which counts the occurrences of every nucleotide in the strand and returns a Map[Char,Int]).
class DNA(strand: String) {
  def count(nucleotide: Char): Int = {
    strand.count(_ == nucleotide)
  }
  def nucleotideCounts = (
    for {
      n <- strand
      c <- count(n)
    } yield (n, c)
  ).toMap
}
The errors I am receiving are:
Error:(10, 17) value map is not a member of Int
c <- count(n)
^
Error:(12, 5) Cannot prove that Char <:< (T, U).
).toMap
^
Error:(12, 5) not enough arguments for method toMap: (implicit ev: <:<[Char,(T, U)])scala.collection.immutable.Map[T,U]. Unspecified value parameter ev.
).toMap
^
I am quite new to Scala, so any enlightenment on why these errors are occurring and suggestions to fixing them would be greatly appreciated.
for comprehensions work over Traversables that have flatMap and map methods defined, as the error message points out.
In your case count returns a plain Int, so there is no need to "iterate" over it; simply add it to your result:
for {
  n <- strand
} yield (n, count(n))
On a side note, this solution is not optimal: in the case of a strand like AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA, count is going to be called many times for the same character. I would recommend calling toSet so you get the distinct Chars only:
for {
  n <- strand.toSet
} yield (n, count(n))
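Putting that back into the class, a quick usage sketch (the strand value and the entry order of the resulting Map are only illustrative):
class DNA(strand: String) {
  def count(nucleotide: Char): Int = strand.count(_ == nucleotide)
  def nucleotideCounts = (for (n <- strand.toSet) yield (n, count(n))).toMap
}
new DNA("GATTACA").nucleotideCounts  // e.g. Map(G -> 1, A -> 3, T -> 2, C -> 1)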
In line with Akos's approach, consider a parallel traversal of a given strand (String):
strand.distinct.par.map( n => n -> count(n) )
Here we use distinct to gather unique items and construct each Map association in map.
A pipeline solution would look like:
def nucleotideCounts() = strand.groupBy(identity).mapValues(_.length)
Another approach is
Map() ++ {for (n <- strand; c = count(n)) yield n->c}
Not sure why it's different than {...}.toMap() but it gets the job done!
Another way to go is
Map() ++ {for (n <- strand; c <- Seq(count(n))) yield n->c}

How to minimize a string's length by iteratively removing all occurrences of some specified words from the string

This question appeared in a programming contest and we still have no idea how to solve it.
Question:
Given a string S and a list of strings L, we want to keep removing all occurrences of substrings that appear in L, and we have to minimize the length of the final string formed. Also note that the removal of a substring may enable further removals.
For example,
S=ccdedefcde, L={cde}
then answer = 1. Because we can reduce S by ccdedefcde -> cdefcde -> fcde -> f.
S=aabaab, L={aa, bb} then answer = 0 as reduction can be carried out by aabaab -> aabb -> aa -> ‘Empty String’
S=acmmcacamapapc, L={mca, pa} then answer=6 as reduction can be carried out by acmmcacamapapc-> acmcamapapc -> acmapapc -> acmapc.
The maximum length of S can be 50 and the maximum length of list L can be 50.
My approach is a basic recursive traversal in which I return the minimum length that I can get by removing different sub-strings. Unfortunately this recursive approach will time out in the worst case input as we have 50 options at each step and the recursion depth is 50.
Please suggest an efficient algorithm that may solve this problem.
Here's a polynomial-time algorithm that yields optimal results. Since it's convenient for me, I'm going to use the polynomial-time CYK algorithm as a subroutine, specifically the extension that computes a minimum-weight parse of a string according to a context-free grammar with weighted productions.
Now we just have to formalize this problem with a context-free grammar. The start symbol is A (usually S, but that's taken already), with the following productions.
A -> N (weight 0)
A -> A C N (weight 0)
I'll explain N shortly. If N and C were terminals, then A would accept the regular language N (C N)*. The nonterminal C matches a single terminal (character).
C -> a (weight 1)
C -> b (weight 1)
C -> c (weight 1)
...
The nonterminal N matches strings that are nullable, that is, strings that can be reduced to the empty string by deleting strings in L. The base case is obvious.
N -> (weight 0)
We also have a production for each element of L. When L = {mca, pa}, for example, we have the following productions.
N -> N m N c N a N (weight 0)
N -> N p N a N (weight 0)
I hope that it's clear how to construct the one-to-one correspondence between iterative removals and parses, where the parse weight is equal to the length of the residual string.
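To make the construction concrete, here it is instantiated for the first example, S = ccdedefcde and L = {cde} (just the scheme above spelled out):
A -> N (weight 0)
A -> A C N (weight 0)
C -> c (weight 1), C -> d (weight 1), C -> e (weight 1), C -> f (weight 1)
N -> (weight 0)
N -> N c N d N e N (weight 0)
A minimum-weight parse derives the prefix ccdede from N (it is nullable: c[cde]de -> cde -> empty), the single f from C at weight 1, and the trailing cde from the final N, for a total weight of 1, matching the answer f of length 1.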
Note: this is not an optimal solution, since it doesn't work for the example S=ABAABABAA, L={ABA}
Algorithm
RECURSIVE_FUNCTION (STRING STR, STRING PATTERN):
1. IF (PATTERN is not found in STR) THEN RETURN STR
2. STRING LEFT = STR.SUBSTR(0, STR.FIND(PATTERN))
3. STRING RIGHT = STR.SUBSTR(STR.FIND(PATTERN) + PATTERN.LENGTH, STR.LENGTH)
4. STRING FIN = RECUR(LEFT) + RECUR(RIGHT)
5. RETURN RECUR(FIN)
Function SUBSTR(A, B) returns the substring of the string from index A inclusive to index B exclusive.
Operation A + B is the concatenation of strings A and B.
Function RECUR(A) calls the same function recursively (with the same PATTERN).
Example: ccdedefcde
First it will branch down with RECUR(LEFT) + RECUR(RIGHT):
   c[cde]defcde
    /        \
   c      def[cde]
           /
         def
Then it will RECUR(FIN) on merge:
     cdef*
    /     \
   c      def
           /
         def
The * marks where RECUR is applied to the merged string before that merge completes:
  [cde]f
       \
        f
and finally the ROOT call returns f
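For what it's worth, here is a minimal Scala sketch of this recursive idea for a single pattern (illustration only; as noted above, the greedy approach is not optimal in general):
def reduce(str: String, pattern: String): String = {
  val i = str.indexOf(pattern)
  if (i < 0) str                                   // base case: pattern not found
  else {
    val left  = str.substring(0, i)                // part before the first occurrence
    val right = str.substring(i + pattern.length)  // part after it (occurrence dropped)
    reduce(reduce(left, pattern) + reduce(right, pattern), pattern)  // merge, then reduce again
  }
}
reduce("ccdedefcde", "cde")  // "f"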
