How do I put a specific image into a random ImageButton? - android-studio

So I'm working on an app in Android Studio (Kotlin). I have 3x3 ImageButtons and a simple "start" button. How do I put a specific image (for example, number_1) in a random place in the 3x3 grid when I hit the start button?
val images: MutableList<Int> = mutableListOf(
number_1, number_2, number_3,
number_4, number_5, number_6, number_7, number_8, number_9
)
val start: Button = findViewById(R.id.button)
val button1: ImageButton = findViewById(R.id.imageButton)
val button2: ImageButton = findViewById(R.id.imageButton2)
val button3: ImageButton = findViewById(R.id.imageButton3)
val button4: ImageButton = findViewById(R.id.imageButton4)
val button5: ImageButton = findViewById(R.id.imageButton5)
val button6: ImageButton = findViewById(R.id.imageButton6)
val button7: ImageButton = findViewById(R.id.imageButton7)
val button8: ImageButton = findViewById(R.id.imageButton8)
val button9: ImageButton = findViewById(R.id.imageButton9)
val buttons = arrayOf(
button1, button2, button3, button4, button5,
button6, button7, button8, button9)

Maybe:
val i = Random.nextInt(buttons.size)
buttons[i].setImageResource(number_1)
Or (as @m0skit0 wrote):
buttons.random().setImageResource(number_1)
And set the listener so this runs when you hit start (Random here is kotlin.random.Random):
start.setOnClickListener {
    // code above goes here
}
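Putting the pieces together, a minimal sketch of the click handler (assuming this lives in the Activity's onCreate, the drawables are referenced as R.drawable.number_1, and kotlin.random.Random is used):
import kotlin.random.Random

// Inside onCreate(), after the findViewById calls and the buttons array above:
start.setOnClickListener {
    // pick one of the nine ImageButtons at random and show number_1 on it
    val randomIndex = Random.nextInt(buttons.size)
    buttons[randomIndex].setImageResource(R.drawable.number_1)
}
If each number should end up on a different button, you could instead shuffle the images list once and assign images[i] to buttons[i].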

Related

Unable to Parse from String to Int within a Case Class

Can someone help me find where exactly I am going wrong in this code? I am unable to parse the phone from String to Integer.
case class contactNew(id:Long,name:String,phone:Int,email:String)
val contactNewData = Array("1#Avinash#Mob-8885453419#avinashbasetty#gmail.com","2#rajsekhar#Mob-9848022338#raj#yahoo.com","3#kamal#Mob-98032446443#kamal#gmail.com")
val contactNewDataToRDD = spark.sparkContext.parallelize(contactNewData)
val contactNewRDD = contactNewDataToRDD.map(l=> {
val contactArray=l.split("#")
val MobRegex=contactArray(2).replaceAll("[a-zA-Z/-]","")
val MobRegex_Int=MobRegex.toInt
contactNew(contactArray(0).toLong,contactArray(1),MobRegex_Int,contactArray(3))
})
contactNewRDD.collect.foreach(println)
You are getting the error because the last phone number (98032446443) is larger than the maximum Int value (2147483647). Convert the number to Long; it should work.
case class contactNew(id:Long,name:String,phone:Long,email:String)
val contactNewData =
Array("1#Avinash#Mob8885453419#avinashbasetty#gmail.com",
"2#rajsekhar#Mob-9848022338#raj#yahoo.com",
"3#kamal#Mob-98032446443#kamal#gmail.com")
val contactNewDataToRDD = spark.sparkContext.parallelize(contactNewData)
val contactNewRDD = contactNewDataToRDD.map(l=>
{
val contactArray=l.split("#")
val MobRegex=contactArray(2).replaceAll("[a-zA-Z/-]","")
val MobRegex_Int=MobRegex.toLong
contactNew(contactArray(0).toLong,contactArray(1),MobRegex_Int,contactArray(3))
}
)
contactNewRDD.collect.foreach(println)
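As a quick sanity check of why the Int parse fails, a REPL-style sketch using the third phone number from the sample data:
// Int is a signed 32-bit type, so its maximum value is 2,147,483,647
println(Int.MaxValue)            // 2147483647
// "98032446443".toInt           // would throw java.lang.NumberFormatException (value exceeds Int.MaxValue)
println("98032446443".toLong)    // 98032446443 fits comfortably in a 64-bit Long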

Why has the data changed after converting to parquet format, when testing by unioning two dataframes?

I wrote a function that operates on a CSV file and converts it to parquet format, and I wonder how to make sure the data stays the same, with nothing lost or added. So I wrote a test for it, but it turns out the counts are not the same.
My logic is:
1) Make the CSV into dataframe A.
2) Write dataframe A out in parquet format, saved to a directory.
3) Read the parquet files back into a new dataframe B.
4) Then A.union(B).
5) Count A, B, and A.union(B).
If the three counts are the same, then I can conclude that they contain the same data.
But the third count (the union) turns out to be different.
def doJob(sc: SparkContext, data: RDD[String]): DataFrame = {
logInfo("Extracting omniture data")
val result = data
.filter(_.contains("PAGE."))
.filter(_.contains(".PACKAGE"))
val sqlsqlContext = new SQLContext(sc)
//just ignore above codes...
val packagesCsvDF = sqlsqlContext.load("com.databricks.spark.csv", Map("path" -> "file:///D:/test/testsample.csv", "header" -> "true"))
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
import sqlContext.implicits._
//
// // we should have some additional filter here
// val mydf = packagesDF.groupBy($"page_url").agg(last($"pagename"),last($"prop46"),last($"prop56"),last($"post_evar34"))
// logInfo("show mydf")
// mydf.show()
//TODO
// save files
logInfo("Saving omniture packages data to S3")
if (true) {
packagesCsvDF
.repartition(sc.defaultParallelism, col("pagename"))
.write
.mode(SaveMode.Append)
.partitionBy("pagename")
.parquet("file:///D:/test/parquet")
logInfo("packagesDF")
}
packagesCsvDF // Has this packagesCsvDF been changed at this point??????
}
TEST:
object ParquetDataTestsSpec {
def main (args: Array[String] ): Unit = {
val sc = new SparkContext(new SparkConf().setAppName("parquet data test Logs").setMaster("local"))
val input = PackagesOmnitureMapReduceJob.formatToJson(sc.textFile("file:///D:/test/option.json", sc.defaultParallelism))
val df = PackagesOmnitureMapReduceJob.doJob(sc, input)//call the function I want to test in "file:///D:/test/parquet"
val sqlContext = new SQLContext(sc)
val SourceCSVDF = sqlContext.load("com.databricks.spark.csv", Map("path" -> "file:///D:/test/testsample.csv", "header" -> "true"))// original
val parquetDataFrame = sqlContext.read.parquet("file:///D:/test/parquet") //get the new dataframe
val dfCount = df.count()
val SourceCSVDFcount = SourceCSVDF.count()
val parquetDataCount = parquetDataFrame.count()
val unionCount = parquetDataFrame.union(SourceCSVDF).count()
println(dfCount,SourceCSVDFcount,parquetDataCount,unionCount)
}
}
print:
(200,200,200,400)
Then I try to write all the dataframes out as JSON:
parquetDataFrame.write.json("file:///D:/test/parquetDataFrame")
SourceCSVDF.write.json("file:///D:/test/SourceCSVDF")
df.write.json("file:///D:/test/Desktop/df")
When I open the JSON files, I find they are all the same. Is the problem coming from the keyword union?
val unionalldis3 = parquetDataFrame.unionAll(SourceCSVDF).distinct().count()
Then the count is right...
But I am very confused. I thought union() was the de-duplicated version of unionAll()...
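For what it's worth, in the DataFrame API union() does not remove duplicate rows (it behaves like SQL UNION ALL); duplicates only go away if you call distinct() afterwards. A minimal sketch that reproduces the doubled count (assuming Spark 2.x, with made-up column names):
import org.apache.spark.sql.SparkSession

object UnionCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("union-count-sketch").master("local[*]").getOrCreate()
    import spark.implicits._

    val a = Seq(("p1", 1), ("p2", 2)).toDF("pagename", "value")
    val b = Seq(("p1", 1), ("p2", 2)).toDF("pagename", "value") // identical rows

    println(a.union(b).count())            // 4 -- union keeps duplicates
    println(a.union(b).distinct().count()) // 2 -- distinct() removes them
    spark.stop()
  }
}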

How to get Int values from HBase using the HBase API getValue method

I am trying to fetch values from HBase using the column names, and below is my code:
val cf = Bytes.toBytes("cf")
val tkn_col_num = Bytes.toBytes("TKN_COL_NUM")
val tkn_col_val = Bytes.toBytes("TKN_COL_VAL")
val col_name = Bytes.toBytes("COLUMN_NAME")
val sc = new SparkContext("local", "hbase-test")
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, input_table)
conf.set(TableInputFormat.SCAN_COLUMNS, "cf:COLUMN_NAME cf:TKN_COL_NUM cf:TKN_COL_VAL")
val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])
hBaseRDD.map{case (x,y) => (y)}.collect().foreach(println)
val colMap : Map[String,(Int,String)] = hBaseRDD.map{case (x,y) =>
((Bytes.toString(y.getValue(cf,col_name))),
(
(Bytes.toInt(y.getValue(cf,tkn_col_num))),
(Bytes.toString(y.getValue(cf,tkn_col_val)))
))
}.collect().toMap
colMap.foreach(println)
sc.stop()
Now Bytes.toString(y.getValue(cf,col_name)) works and I get the expected column names from the table; however, Bytes.toInt(y.getValue(cf,tkn_col_num)) gives me some seemingly random values (I guess they are offset values for the cell, but I am not sure). Below is the output I am getting:
(COL1,(-2147483639,sum))
(COL2,(-2147483636,sum))
(COL3,(-2147483645,count))
(COL4,(-2147483642,sum))
(COL5,(-2147483641,sum))
The integer values should be 1, 2, 3, 4, 5. Can anyone please guide me on how to get the true integer column data?
Thanks
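One common cause of this symptom is that the TKN_COL_NUM cells were not written as 4-byte integers (for example, they were put as strings from the HBase shell), so Bytes.toInt reinterprets whatever bytes happen to be stored. That is an assumption here, since the writing side isn't shown, but if it holds, decoding the cell as a string and then parsing it should give the expected 1..5:
// Assumption: TKN_COL_NUM was stored as a string (e.g. "1"), not as Bytes.toBytes(1: Int)
val colMap: Map[String, (Int, String)] = hBaseRDD.map { case (_, result) =>
  val name   = Bytes.toString(result.getValue(cf, col_name))
  val colNum = Bytes.toString(result.getValue(cf, tkn_col_num)).trim.toInt
  val colVal = Bytes.toString(result.getValue(cf, tkn_col_val))
  (name, (colNum, colVal))
}.collect().toMap
If the cells really were written with Bytes.toBytes(someInt), then Bytes.toInt is the right call, and it would be worth checking that getValue returns exactly 4 bytes for those cells.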

The StreamingContext.textFileStream file is read, but no results are displayed on the console

def textfile={
val ssc = new StreamingContext(conf, Seconds(10))
val lines = ssc.textFileStream("hdfs://master:9000/streaming/")
val words = lines.flatMap(_.split("\\s"));
val pairs = words.map(word => (word, 1));
val wordCounts = pairs.reduceByKey(_ + _);
wordCounts.print();
ssc.start();
ssc.awaitTermination();
}
The results do not show up
textFileStream only picks up files created after you start the streaming application. If you also want to process the files that already exist in the directory, you can use the following workaround:
ssc.fileStream[LongWritable, Text, TextInputFormat](
  directory,
  filter = path => !path.getName().startsWith("."),
  newFilesOnly = false).map(_._2.toString)
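Folding the workaround into the original function, a sketch (keeping the same conf and HDFS directory from the question) could look like this:
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.streaming.{Seconds, StreamingContext}

def textfile = {
  val ssc = new StreamingContext(conf, Seconds(10))
  // newFilesOnly = false also processes files already present in the directory
  val lines = ssc.fileStream[LongWritable, Text, TextInputFormat](
    "hdfs://master:9000/streaming/",
    filter = path => !path.getName().startsWith("."),
    newFilesOnly = false
  ).map(_._2.toString)
  val words = lines.flatMap(_.split("\\s"))
  val pairs = words.map(word => (word, 1))
  val wordCounts = pairs.reduceByKey(_ + _)
  wordCounts.print()
  ssc.start()
  ssc.awaitTermination()
}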

How to get a result RDD from two RDDs according to a function

I am a beginner with Apache Spark. I want to combine two RDDs into a result RDD with the code below:
def runSpark(stList:List[SubStTime],icList:List[IcTemp]): Unit ={
val conf = new SparkConf().setAppName("OD").setMaster("local[*]")
val sc = new SparkContext(conf)
val st = sc.parallelize(stList).map(st => ((st.productId,st.routeNo),st)).groupByKey()
val ic = sc.parallelize(icList).map(ic => ((ic.productId,ic.routeNo),ic)).groupByKey()
//TODO
//val result = st.join(ic).mapValues( )
sc.stop()
}
Here is what I want to do:
List[ST] -> map -> Map(Key, st) -> groupByKey -> Map(Key, List[st])
List[IC] -> map -> Map(Key, ic) -> groupByKey -> Map(Key, List[ic])
STRDD join ICRDD to get Map(Key, (List[st], List[ic]))
I have a function that compares listST and listIC and returns a List[result]; result contains information from both SubStTime and IcTemp:
def calcIcSt(st:List[SubStTime],ic:List[IcTemp]): List[result]
I don't know how to use mapValues or some other way to get my result.
Thanks
val result = st.join(ic).mapValues( x => calcIcSt(x._1.toList, x._2.toList) )
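Spelling out the types makes the fix clearer: after groupByKey each side holds Iterable values, so they need .toList before calling calcIcSt. A sketch reusing the names from the question, and assuming productId and routeNo are Strings:
import org.apache.spark.rdd.RDD

val stGrouped: RDD[((String, String), Iterable[SubStTime])] =
  sc.parallelize(stList).map(s => ((s.productId, s.routeNo), s)).groupByKey()
val icGrouped: RDD[((String, String), Iterable[IcTemp])] =
  sc.parallelize(icList).map(i => ((i.productId, i.routeNo), i)).groupByKey()

// join keeps only keys present on both sides and pairs up the two Iterables
val joined: RDD[((String, String), List[result])] =
  stGrouped.join(icGrouped).mapValues { case (sts, ics) => calcIcSt(sts.toList, ics.toList) }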
