Using Apache Spark to find frequent contiguous sequences - apache-spark

How can I use Apache Spark to find frequent contiguous sequences in a string?

Try taking the initial string, extracting its distinct substrings of each length in range, then broadcasting the initial string over them and filtering for matches. Something like this should work in a spark-shell:
val s = "AATTGTGTGTGTGATTTTTTAATG" //your string
val s_broadcast = sc.broadcast(s) //broadcast version
val A = 2 // min length of substring
val B = 3 // max length of substring
val C = 3 // min support
val L = s.size //length of the string
sc.parallelize(
for{
i <- A to B
j <- 0 to (L - i)
} yield (j,i+j)
) // generating paris of substrings
.map{case(j,i)=>s_broadcast.value.substring(j,i)}
.distinct // if optimization is needed, this step is a place to start
.filter(x=>s_broadcast.value.indexOf(x*C)>=0)
.collect
.map(_*C)
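For the sample string above, this should return (order may vary, since the RDD is unordered after distinct):
Array(GTGTGT, TGTGTG, TTTTTT)
since "GT", "TG" and "TT" are the only substrings of length 2 to 3 that occur at least C = 3 times in a row.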
EDITED:
As an afterthought, here's code that returns the LONGEST repeated substrings. The previous code fixes the repetition count at C; this one searches for the longest run of at least C repetitions.
val s = "AATTGTGTGTGTGTGATTTTTTAATG" //your string
val s_broadcast = sc.broadcast(s) //broadcast version
val A = 2 // min length of substring
val B = 3 // max length of substring
val C = 3 // min support
val L = s.size //length of the string
sc.parallelize(
for{
i <- A to B
j <- 0 to (L - i)
} yield (j,i+j)
) // generating paris of substrings
.map{case(j,i)=>s_broadcast.value.substring(j,i)}
.distinct // if optimization is needed, this step is a place to start
.flatMap(x=>
for{
v <- C to L/A
} yield x->v
) //making "AB"->3 pairs, which will result in search for "ABABAB"
.filter{case(x,v)=>s_broadcast.value.indexOf(x*v)>=0}
.groupByKey //grouping same substrings of different length
.map{case(k,v)=>k->v.max} //getting longer substring
.collect //bringing substring to the driver
.map{case(k,v)=>k*v}
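A small design note: groupByKey materializes every repetition count per substring only to take their max; reduceByKey folds on the map side and avoids shuffling the full groups. A drop-in replacement for the two grouping lines above (my suggestion, same semantics):
  .reduceByKey(math.max) // keep the largest repetition count without materializing groups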

Related

scala - string parsing without Regex

I have various types of strings like the following:
sales_data_type
saledatatypes
sales_data.new.metric1
sales_data.type.other.metric2
sales_data.type3.metric3
I'm trying to parse them to get a substring with a word before and after the last dot. For example: new.metric1, other.metric2, type3.metric3. If a word doesn't contain dots, it has to be returned as is: sales_data_type, saledatatypes.
With a Regex it may be done this way:
val infoStr = "sales_data.type.other.metric2"
val pattern = ".*?([^.]+\\.)?([^.]+)$"
println(infoStr.replaceAll(pattern, "$1$2"))
// prints other.metric2
// for saledatatypes just prints nullsaledatatypes ??? but seems to work
I want to find a way to achieve this with Scala, without using Regex in order to expand my understanding of Scala features. Will be grateful for any ideas.
One-liner:
dataStr.split('.').takeRight(2).mkString(".")
takeRight(2) takes the last 2 elements if there are at least 2 to take, else the single element that is there. mkString(".") re-inserts the dot only if there are 2 elements for it to go between; otherwise it leaves the string unaltered.
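A quick check of the one-liner against two of the sample strings:
val samples = Seq("saledatatypes", "sales_data.type.other.metric2")
samples.map(_.split('.').takeRight(2).mkString("."))
// -> List(saledatatypes, other.metric2)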
Here's one with lots of Scala features for you.
val string = "head.middle.last"
val split = string.split('.') // Array(head, middle, last)
val result = split.toSeq match {
  case Seq(word)            => word
  case _ :+ before :+ after => s"$before.$after"
}
println(result) // middle.last
First we split the string on your . and get individual parts.
Then we pattern match those parts, first to check if there is only one (in which case we just return it), and second to grab the last two elements in the seq.
Finally we put a . back in between those last two using string interpolation.
One way of doing it (returning the string unchanged when there is no dot, per the requirement):
val value = "sales_data.type.other.metric2"
val elems = value.split("\\.").toList
elems match {
  case _ :+ beforeLast :+ last => s"$beforeLast.$last"
  case _                       => value
}
for (s <- strs) yield {
  val s1 = s.split('.')
  if (s1.size >= 2) s1.takeRight(2).mkString(".") else s
}
or
for (s <- strs) yield {
  val s1 = s.split('.')
  if (s1.size >= 2) s1.init.last + '.' + s1.last else s
}
In the Scala REPL:
scala> val strs = Vector("sales_data_type", "saledatatypes", "sales_data.new.metric1", "sales_data.type.other.metric2", "sales_data.type3.metric3")
strs: scala.collection.immutable.Vector[String] = Vector(sales_data_type, saledatatypes, sales_data.new.metric1, sales_data.type.other.metric2, sales_data.type3.metric3)

scala> for (s <- strs) yield { val s1 = s.split('.'); if (s1.size >= 2) s1.takeRight(2).mkString(".") else s }
res62: scala.collection.immutable.Vector[String] = Vector(sales_data_type, saledatatypes, new.metric1, other.metric2, type3.metric3)

scala> for (s <- strs) yield { val s1 = s.split('.'); if (s1.size >= 2) s1.init.last + '.' + s1.last else s }
res60: scala.collection.immutable.Vector[String] = Vector(sales_data_type, saledatatypes, new.metric1, other.metric2, type3.metric3)
Use Scala's match like this:
def getFormattedStr(str: String): String = {
  str.contains(".") match {
    case true =>
      val arr = str.split("\\.")
      val len = arr.length
      len match {
        case 1 => str
        case _ => arr(len - 2) + "." + arr(len - 1)
      }
    case _ => str
  }
}
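A quick usage check:
getFormattedStr("sales_data.type.other.metric2") // other.metric2
getFormattedStr("saledatatypes")                 // saledatatypes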

RDD of Tuple and RDD of Row differences

I have two different RDDs, apply a foreach on both of them, and notice a difference that I cannot resolve.
First one:
val data = Array(("CORN",6), ("WHEAT",3),("CORN",4),("SOYA",4),("CORN",1),("PALM",2),("BEANS",9),("MAIZE",8),("WHEAT",2),("PALM",10))
val rdd = sc.parallelize(data,3) // NOT sorted
rdd.foreach { x =>
  println(x)
}
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[103] at parallelize at command-325897530726166:8
Works fine in this sense.
Second one:
rddX.foreach { x =>
  val prod = x(0)
  val vol = x(1)
  val prt = counter
  val cnt = counter * 100
  println(prt, cnt, prod, vol)
}
rddX: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[128] at rdd at command-686855653277634:51
Works fine.
Question: why can I not do val prod = x(0) in the first example, as I do in the second? And how could I do that with foreach? Or would we always need to use map for the first case? Is this due to Row internals in the second example?
As you can see, the difference is in the datatypes.
The first one is RDD[(String, Int)].
This is an RDD of Tuple2, which contains (String, Int), so you can access the first (String) value as val prod = x._1 and the second (Int) value as x._2.
Since it is a tuple, you can't access it as val prod = x(0).
The second one is RDD[org.apache.spark.sql.Row], which can be accessed as
val prod = x.getString(0) or val prod = x(0)
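A minimal sketch of the two access styles side by side (assuming, as above, that the Row's first column is a String):
// tuple RDD: destructure with a pattern match instead of positional access
rdd.foreach { case (prod, vol) => println(prod, vol) }
// Row RDD: typed getters avoid the Any you get back from x(0)
rddX.foreach { row => println(row.getString(0), row.get(1)) }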
I hope this helped!

How to generate random vector in Spark

I want to generate random vectors with norm 1 in Spark.
Since the vector could be very large, I want it to be distributed. And since data in an RDD has no order, I want to store the vector in the form of RDD[(Int, Double)], because I also need to use this vector to do some matrix-vector multiplication.
So how could I generate this kind of vector?
Here is my plan for now:
val v = normalRDD(sc, n, NUM_NODE)
val mod = GetMod(v) // get the norm (modulus) of v; GetMod is the asker's own function
val res = v.map(x => x / mod)

val arr: Array[Double] = res.collect()
var tuples = List[(Int, Double)]()
for (i <- 0 to (arr.length - 1)) {
  tuples = (i, arr(i)) :: tuples
}
// Get the entries and length of the vector.
entries = sc.parallelize(tuples)
length = arr.length
I think it is not elegant enough because it goes through a "distributed -> single node -> distributed" process.
Is there any way better? Thanks:D
try this:
import scala.util.Random
import scala.math.sqrt

val n = 5 // insert the length of your vector here
val randomRDD = sc.parallelize(for (i <- 0 until n) yield (i, Random.nextDouble)) // note: until, not to, so exactly n elements
val norm = sqrt(randomRDD.map(x => x._2 * x._2).sum())
val finalRDD = randomRDD.mapValues(x => x / norm)
You can use this function to generate a random vector, then normalise it by dividing each element by the vector's norm (as done above), or by using MLlib's Normalizer.
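If the vector should stay distributed end to end, avoiding the collect-then-parallelize round trip the asker dislikes, a sketch along these lines should work (it assumes MLlib's RandomRDDs.normalRDD; the names n, indexed and unit are illustrative):
import org.apache.spark.mllib.random.RandomRDDs.normalRDD
import scala.math.sqrt

val n = 1000000L // desired vector length
val indexed = normalRDD(sc, n).zipWithIndex.map { case (x, i) => (i.toInt, x) } // RDD[(Int, Double)]
val norm = sqrt(indexed.values.map(x => x * x).sum())
val unit = indexed.mapValues(_ / norm) // distributed vector with norm 1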

Scala: How to count occurrences of unique items in a certain index?

I have a list that is formatted like the lists below:
List(List(21, Georgetown, Male),List(29, Medford, Male),List(18, Manchester, Male),List(27, Georgetown, Female))
And I need to count the occurrences of each unique town name, then return the town name and the number of times it was counted. But I only want to return the one town that had the most occurrences. So if I applied the function to the list above, I would get
(Georgetown, 2)
I'm coming from Java, so I know how to do this process in a longer way, but I want to utilize some of Scala's built in methods.
scala> val towns = List(
| List(21, "Georgetown", "Male"),
| List(29, "Medford", "Male"),
| List(18, "Manchester", "Male"),
| List(27, "Georgetown", "Female"))
towns: List[List[Any]] = ...
scala> towns.map({ case List(a, b, c) => (b, c) }).groupBy(_._1).mapValues(_.length).maxBy(_._2)
res0: (Any, Int) = (Georgetown,2)
This is a pretty weird structure, but a way to do it would be with:
val items : List[List[Any]] = List(
List(List(21, "Georgetown", "Male")),
List(List(29, "Medford", "Male")),
List(List(18, "Manchester", "Male")),
List(List(27, "Georgetown", "Female"))).map(_.flatten)
val results = items.foldLeft(Map[String, Int]()) { (acc, item) =>
  val key = item(1).asInstanceOf[String]
  val count = acc.getOrElse(key, 0)
  acc + (key -> (count + 1))
}
println(results)
Which produces:
Map(Georgetown -> 2, Medford -> 1, Manchester -> 1)
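To get just the single most frequent town, as the question asks, one more step on that Map does it:
results.maxBy(_._2) // (Georgetown,2)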

Detecting the index in a string that is not a printable character with Scala

I have a method that detects the indices of non-printable characters in a string, as follows.
def isPrintable(v:Char) = v >= 0x20 && v <= 0x7E
val ba = List[Byte](33,33,0,0,0)
ba.zipWithIndex.filter { v => !isPrintable(v._1.toChar) } map {v => v._2}
> res115: List[Int] = List(2, 3, 4)
The elements of the result list are the indices, but I wonder if there is a simpler way to do this.
If you want an Option[Int] of the first non-printable character (if one exists), you can do:
ba.zipWithIndex.collectFirst {
  case (char, index) if !isPrintable(char.toChar) => index
}
> res4: Option[Int] = Some(2)
If you want all the indices like in your example, just use collect instead of collectFirst and you'll get back a List.
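For completeness, the collect variant just described:
ba.zipWithIndex.collect {
  case (char, index) if !isPrintable(char.toChar) => index
}
// -> List(2, 3, 4)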
For getting only the first index that meets the given condition:
ba.indexWhere(v => !isPrintable(v.toChar))
(it returns -1 if nothing is found)
You can use a regexp directly to find non-printable characters by their Unicode code points.
Resource: Regexp page
This way you can filter your string directly with such a pattern, for instance:
val text = "this is \n sparta\t\r\n!!!"
text.zipWithIndex.filter(_._1.toString.matches("\\p{C}")).map(_._2) // note the toString: Char itself has no matches method
> res3: Vector(8, 16, 17, 18)
As a result you get a Vector with the indices of all non-printable characters in the String.
If only the first occurrence of a non-printable char is desired:
Method span, applied to a List, delivers two sublists: the first where all elements hold a condition, and the second starting with the first element that falsifies the condition. In this case, consider
val (l, r) = ba.span(b => isPrintable(b.toChar))
l: List(33, 33)
r: List(0, 0, 0)
To get the index of the first non-printable char:
l.size
res: Int = 2
If all occurrences of non-printable chars are desired:
Consider partitioning a given List by the criterion. For instance, for
val ba2 = List[Byte](33, 33, 0, 33, 33)
val (l, r) = ba2.zipWithIndex.partition(b => isPrintable(b._1.toChar))
l: List((33,0), (33,1), (33,3), (33,4))
r: List((0,2))
where r contains tuples of the non-printable chars and their positions in the original List.
I am not sure whether a list of indices or of tuples is needed, and whether ba needs to be a list of bytes or starts off as a string.
for { i <- 0 until ba.length if !isPrintable(ba(i).toChar) } yield i
here, because people need performance :)
import scala.annotation.tailrec
import scala.collection.mutable.ListBuffer

def getNonPrintable(ba: List[Byte]): List[Int] = {
  val buffer = ListBuffer[Int]()
  @tailrec
  def go(xs: List[Byte], cur: Int): ListBuffer[Int] = {
    xs match {
      case Nil => buffer
      case y :: ys =>
        if (!isPrintable(y.toChar)) buffer += cur
        go(ys, cur + 1)
    }
  }
  go(ba, 0)
  buffer.toList
}
