Retrieve nested data from Firebase Realtime Database in Android

Snapshot of my Firebase Realtime Database:
I want to extract all the data under the "Orders" node. How should I model my data classes for Android in Kotlin?
I tried this kind of modeling, after getting a reference to (Orders/uid/):
Order.kt
data class Order(
    val items: ArrayList<MyItems> = ArrayList(),
    val timeStamp: Long = 0,
    val totalCost: Int = 0
)
MyItems.kt
data class MyItems(
    val Item: ArrayList<Menu> = ArrayList()
)
Menu.kt
data class Menu(
    val menCategory: String = "",
    val menName: String = "",
    val menImage: String = "",
    val menId: String = "",
    val menQuantity: Int = 0,
    val menCost: Int = 0
)

After a lot of thinking and research online, I was finally able to model my classes and attach a value event listener. Here it is:
Order.kt
data class Order(
    val items: ArrayList<HashMap<String, Any>> = ArrayList(),
    val timeStamp: Long = 0,
    val totalCost: Int = 0
)
OItem.kt
data class OItem(
    val menCategory: String = "",
    val menId: String = "",
    val menImage: String = "",
    val menName: String = "",
    val menPrice: Int = 0,
    var menQuantity: Int = 0
)
MainActivity.kt
val ordList = ArrayList<Order>()   // collects the orders read from the snapshot
val uid = FirebaseAuth.getInstance().uid
val ref = FirebaseDatabase.getInstance().getReference("Orders/$uid")
ref.addListenerForSingleValueEvent(object : ValueEventListener {
    override fun onCancelled(error: DatabaseError) {
        // Handle the cancelled read here
    }

    override fun onDataChange(p0: DataSnapshot) {
        p0.children.forEach {
            val order = it.getValue(Order::class.java)
            ordList.add(order!!)
        }
        Log.d("hf", ordList.toString())
    }
})
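Since items is deserialized as a list of HashMap<String, Any>, each map still has to be converted into an OItem by hand. A minimal sketch of one way to do that, assuming the map keys match the OItem property names above (Firebase returns numbers as Long, hence the Number casts):

fun Order.toItems(): List<OItem> = items.map { map ->
    OItem(
        menCategory = map["menCategory"] as? String ?: "",
        menId = map["menId"] as? String ?: "",
        menImage = map["menImage"] as? String ?: "",
        menName = map["menName"] as? String ?: "",
        menPrice = (map["menPrice"] as? Number)?.toInt() ?: 0,
        menQuantity = (map["menQuantity"] as? Number)?.toInt() ?: 0
    )
}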


Is it possible to write a dataframe into 2 files of different types?

We can use the following API to write a dataframe to local files.
df.write.parquet(path)
df.write.json(path)
However, can I write to both Parquet and JSON in one pass without computing the dataframe twice?
By the way, I don't want to cache the data in memory, because it's too big.
If you don't cache/persist the dataframe, it will need to be recomputed for each output format.
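One workaround, if materializing the data on local disk (rather than in memory) is acceptable, is to persist with a disk-only storage level and issue the two writes against the cached data. A minimal sketch, with parquetPath and jsonPath as placeholder paths:

import org.apache.spark.storage.StorageLevel

// Compute the dataframe once, keep it on disk, and reuse it for both writes.
df.persist(StorageLevel.DISK_ONLY)
df.write.parquet(parquetPath)
df.write.json(jsonPath)
df.unpersist()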
Alternatively, we can implement an org.apache.spark.sql.execution.datasources.FileFormat that writes both formats in a single pass.
DuplicateOutFormat demo
import org.apache.hadoop.fs.FileStatus
import org.apache.hadoop.mapreduce.{Job, TaskAttemptContext}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.execution.datasources.{DataSource, FileFormat, OutputWriter, OutputWriterFactory}
import org.apache.spark.sql.sources.DataSourceRegister
import org.apache.spark.sql.types.StructType

/**
 * Very Dangerous Toy Code. DO NOT USE IN PRODUCTION.
 */
class DuplicateOutFormat
  extends FileFormat
    with DataSourceRegister
    with Serializable {

  // Reading is not supported; this format only duplicates writes.
  override def inferSchema(sparkSession: SparkSession,
                           options: Map[String, String],
                           files: Seq[FileStatus]): Option[StructType] = {
    throw new UnsupportedOperationException()
  }

  override def prepareWrite(sparkSession: SparkSession,
                            job: Job,
                            options: Map[String, String],
                            dataSchema: StructType): OutputWriterFactory = {
    val format1 = options("format1")
    val format2 = options("format2")
    val format1Instance = DataSource.lookupDataSource(format1, sparkSession.sessionState.conf)
      .newInstance().asInstanceOf[FileFormat]
    val format2Instance = DataSource.lookupDataSource(format2, sparkSession.sessionState.conf)
      .newInstance().asInstanceOf[FileFormat]
    val writerFactory1 = format1Instance.prepareWrite(sparkSession, job, options, dataSchema)
    val writerFactory2 = format2Instance.prepareWrite(sparkSession, job, options, dataSchema)

    new OutputWriterFactory {
      override def getFileExtension(context: TaskAttemptContext): String = ".dup"

      override def newInstance(path: String, dataSchema: StructType, context: TaskAttemptContext): OutputWriter = {
        val path1 = path.replace(".dup", writerFactory1.getFileExtension(context))
        val path2 = path.replace(".dup", writerFactory2.getFileExtension(context))
        val writer1 = writerFactory1.newInstance(path1, dataSchema, context)
        val writer2 = writerFactory2.newInstance(path2, dataSchema, context)

        new OutputWriter {
          // Every row is forwarded to both underlying writers.
          override def write(row: InternalRow): Unit = {
            writer1.write(row)
            writer2.write(row)
          }

          override def close(): Unit = {
            writer1.close()
            writer2.close()
          }
        }
      }
    }
  }

  override def shortName(): String = "dup"
}
SPI
We should create an SPI file at /META-INF/services/org.apache.spark.sql.sources.DataSourceRegister with the content:
com.github.sparkdemo.DuplicateOutFormat
demo usage
import org.apache.spark.sql.SparkSession
import org.scalatest.FunSuite

class DuplicateOutFormatTest extends FunSuite {
  val spark = SparkSession.builder()
    .master("local")
    .getOrCreate()
  val sc = spark.sparkContext

  import spark.implicits._

  test("testDuplicateWrite") {
    val data = Array(
      ("k1", "fa", "20210901", 16),
      ("k2", null, "20210902", 15),
      ("k3", "df", "20210903", 14),
      ("k4", null, "20210904", 13)
    )
    val tempDir = System.getProperty("java.io.tmpdir") + "spark-dup-test" + System.nanoTime()
    val df = sc.parallelize(data).toDF("k", "col2", "day", "col4")
    df.write
      .option("format1", "csv")
      .option("format2", "orc")
      .format("dup")
      .save(tempDir)
    df.show(1000, false)
  }
}
WARNING
Spark SQL couples some internals in DataFrameWriter#saveToV1Source and other source code that we can't change. This custom DuplicateOutFormat is just a demo and lacks tests. The full demo is on GitHub.

Same value was passed as the nextKey in two sequential Pages loaded from a PagingSource in Paging Library 3 Android

I migrated from Paging 2 to Paging 3 and tried to port an ItemKeyedDataSource from Paging 2 to Paging Library 3. The problem I'm facing is that the same value (currentJodId) is passed as the nextKey for two sequential pages loaded, and after that the app crashes. If I add "keyReuseSupported = true" to the DataSource, the app doesn't crash, but it keeps passing the same item id as the nextKey.
JobSliderRestApi.kt
@GET("job/list/slides")
fun getDetailOfSelectedJob(
    @Query("current_job") currentJodId: Int?,
    @Query("limit") jobLimit: Int?,
    @Query("search_in") fetchType: String?
): Single<Response<JobViewResponse>>
JobViewResponse.kt
data class JobViewResponse(
    @SerializedName("data") val data: ArrayList<JobDetail>?
) : BaseResponse()
JobDetail.kt
data class JobDetail(
    @SerializedName("job_id") val jobId: Int,
    @SerializedName("tuition_type") val jobType: String?,
    @SerializedName("class_image") val jobImage: String,
    @SerializedName("salary") val salary: String,
    @SerializedName("no_of_student") val noOfStudent: Int,
    @SerializedName("student_gender") val studentGender: String,
    @SerializedName("tutor_gender") val preferredTutor: String,
    @SerializedName("days_per_week") val daysPerWeek: String?,
    @SerializedName("other_req") val otherReq: String?,
    @SerializedName("latitude") val latitude: Double?,
    @SerializedName("longitude") val longitude: Double?,
    @SerializedName("area") val area: String,
    @SerializedName("tutoring_time") val tutoringTime: String?,
    @SerializedName("posted_date") val postedDate: String?,
    @SerializedName("subjects") val subjects: String,
    @SerializedName("title") val title: String
)
JodSliderDataSource.kt
class JodSliderDataSource @Inject constructor(
    private val jobSliderRestApi: JobSliderRestApi
) : RxPagingSource<Int, JobDetail>() {

    // override val keyReuseSupported = true

    @ExperimentalPagingApi
    override fun getRefreshKey(state: PagingState<Int, JobDetail>): Int? {
        return state.anchorPosition?.let {
            state.closestItemToPosition(it)?.jobId
        }
    }

    override fun loadSingle(params: LoadParams<Int>): Single<LoadResult<Int, JobDetail>> {
        return jobSliderRestApi.getDetailOfSelectedJob(42673, 2, "next").toSingle()
            .subscribeOn(Schedulers.io())
            .map { jobResponse -> toLoadResult(jobResponse.data) }
            .onErrorReturn { LoadResult.Error(it) }
    }

    private fun toLoadResult(data: ArrayList<JobDetail>): LoadResult<Int, JobDetail> {
        return LoadResult.Page(data = data, prevKey = null, nextKey = data.lastOrNull()?.jobId)
    }
}
I was getting the same error and this is what worked for me: in the JodSliderDataSource class's toLoadResult method, set the nextKey parameter by taking the key from the response data and adding one.
private fun toLoadResult(
    data: ArrayList<JobDetail>
): LoadResult<Int, JobDetail> {
    return LoadResult.Page(
        data = data,
        prevKey = null,
        nextKey = data.lastOrNull()?.jobId?.plus(1) // Add one so the same key is never returned twice.
    )
}
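Separately from that fix, note that loadSingle in the question always requests the hard-coded id 42673, so the backend keeps returning the same last item and therefore the same nextKey. A sketch of passing the paging key through instead, assuming the Retrofit method returns Single<Response<JobViewResponse>> as declared:

override fun loadSingle(params: LoadParams<Int>): Single<LoadResult<Int, JobDetail>> {
    // params.key is null only for the initial load; afterwards it holds the previous nextKey.
    return jobSliderRestApi.getDetailOfSelectedJob(params.key, params.loadSize, "next")
        .subscribeOn(Schedulers.io())
        .map { response -> toLoadResult(response.body()?.data ?: arrayListOf()) }
        .onErrorReturn { LoadResult.Error(it) }
}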

How to search data from a RecyclerView (Retrofit, Kotlin, Android Studio)?

How do I search for data in a RecyclerView?
I have a model like this and I get the data with Retrofit. I need to build a RecyclerView from this model; how do I add search to the RecyclerView?
data class ListArticle(
    @SerializedName("status") val status: String,
    @SerializedName("totalResults") val totalResults: Int,
    @SerializedName("articles") val articles: List<Articles>
)

data class Source(
    @SerializedName("id") val id: String,
    @SerializedName("name") val name: String
)

data class Articles(
    @SerializedName("source") val source: Source,
    @SerializedName("author") val author: String,
    @SerializedName("title") val title: String,
    @SerializedName("description") val description: String,
    @SerializedName("url") val url: String,
    @SerializedName("urlToImage") val urlToImage: String,
    @SerializedName("publishedAt") val publishedAt: String,
    @SerializedName("content") val content: String
)
I had to solve this myself, and I'm sorry for the incomplete question.
Just add this to your adapter:
// `articles` is the full list loaded from Retrofit; `searchArticles` is the list the adapter displays.
fun getFilter(): Filter? {
    return articleFilter
}

private val articleFilter: Filter = object : Filter() {
    override fun performFiltering(constraint: CharSequence?): FilterResults? {
        val filteredList: MutableList<Articles> = ArrayList()
        if (constraint == null || constraint.length == 0) {
            filteredList.addAll(articles)
        } else {
            val filterPattern = constraint.toString().toLowerCase().trim { it <= ' ' }
            for (item in articles) {
                if (item.title.toLowerCase().contains(filterPattern)) {
                    filteredList.add(item)
                }
            }
        }
        val results = FilterResults()
        results.values = filteredList
        return results
    }

    override fun publishResults(constraint: CharSequence?, results: FilterResults) {
        searchArticles = results.values as List<Articles>
        notifyDataSetChanged()
    }
}

fun setData(article: List<Articles>) {
    searchArticles = article
    notifyDataSetChanged()
}
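To drive the filter from the UI, one option is to forward query changes from a SearchView to the adapter; searchView and adapter below are assumed names for the view and adapter instances in your Activity or Fragment:

searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
    override fun onQueryTextSubmit(query: String?): Boolean = false

    override fun onQueryTextChange(newText: String?): Boolean {
        // Delegate to the adapter's Filter defined above.
        adapter.getFilter()?.filter(newText)
        return true
    }
})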

Spark Structured Streaming Kafka Avro producer

I have a dataframe, let's say:
val someDF = Seq(
  (8, "bat"),
  (64, "mouse"),
  (-27, "horse")
).toDF("number", "word")
I want to send that dataframe to a Kafka topic using Avro serialization and a schema registry. I believe I'm almost there, but I can't seem to get past the "Task not serializable" error. I understand there is a sink for Kafka, but it doesn't communicate with the schema registry, which is a requirement.
object Holder extends Serializable {
  def prop(): java.util.Properties = {
    val props = new Properties()
    props.put("schema.registry.url", schemaRegistryURL)
    props.put("key.serializer", classOf[KafkaAvroSerializer].getCanonicalName)
    props.put("value.serializer", classOf[KafkaAvroSerializer].getCanonicalName)
    props.put("schema.registry.url", schemaRegistryURL)
    props.put("bootstrap.servers", brokers)
    props
  }

  def vProps(props: java.util.Properties): kafka.utils.VerifiableProperties = {
    val vProps = new kafka.utils.VerifiableProperties(props)
    vProps
  }

  def messageSchema(vProps: kafka.utils.VerifiableProperties): org.apache.avro.Schema = {
    val ser = new KafkaAvroEncoder(vProps)
    val avro_schema = new RestService(schemaRegistryURL).getLatestVersion(subjectValueName)
    val messageSchema = new Schema.Parser().parse(avro_schema.getSchema)
    messageSchema
  }

  def avroRecord(messageSchema: org.apache.avro.Schema): org.apache.avro.generic.GenericData.Record = {
    val avroRecord = new GenericData.Record(messageSchema)
    avroRecord
  }

  def ProducerRecord(avroRecord: org.apache.avro.generic.GenericData.Record): org.apache.kafka.clients.producer.ProducerRecord[org.apache.avro.generic.GenericRecord, org.apache.avro.generic.GenericRecord] = {
    val record = new ProducerRecord[GenericRecord, GenericRecord](topicWrite, avroRecord)
    record
  }

  def producer(props: java.util.Properties): KafkaProducer[GenericRecord, GenericRecord] = {
    val producer = new KafkaProducer[GenericRecord, GenericRecord](props)
    producer
  }
}
val prod: (String, String) => String = (number: String, word: String) => {
  val prop = Holder.prop()
  val vProps = Holder.vProps(prop)
  val mSchema = Holder.messageSchema(vProps)
  val aRecord = Holder.avroRecord(mSchema)
  aRecord.put("number", number)
  aRecord.put("word", word)
  val record = Holder.ProducerRecord(aRecord)
  val producer = Holder.producer(prop)
  producer.send(record)
  "sent"
}

val prodUDF: org.apache.spark.sql.expressions.UserDefinedFunction =
  udf((number: String, word: String) => prod(number, word))

val testDF = someDF.withColumn("sent", prodUDF(col("number"), col("word")))
KafkaProducer is not serializable.
Create the KafkaProducer inside prod() instead of creating it outside, so it is instantiated on the executors rather than captured in the closure sent from the driver.
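As a sketch of that idea (not part of the original answer): creating one producer per partition with foreachPartition avoids building a producer for every row while still instantiating it on the executors. It reuses the question's Holder helpers and assumes the "number" column is written as a string into the Avro record; error handling is omitted.

someDF.rdd.foreachPartition { rows =>
  // Everything below runs on the executor, so nothing has to be serialized from the driver.
  val props = Holder.prop()
  val producer = Holder.producer(props)
  val schema = Holder.messageSchema(Holder.vProps(props))
  rows.foreach { row =>
    val rec = Holder.avroRecord(schema)
    rec.put("number", row.getAs[Int]("number").toString)
    rec.put("word", row.getAs[String]("word"))
    producer.send(Holder.ProducerRecord(rec))
  }
  producer.flush()
  producer.close()
}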

Exception when using UDT in Spark DataFrame

I'm trying to create a user-defined type in Spark SQL, but I receive:
com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType, even when using their example. Has anyone made this work?
My code:
test("udt serialisation") {
val points = Seq(new ExamplePoint(1.3, 1.6), new ExamplePoint(1.3, 1.8))
val df = SparkContextForStdout.context.parallelize(points).toDF()
}
@SQLUserDefinedType(udt = classOf[ExamplePointUDT])
case class ExamplePoint(val x: Double, val y: Double)

/**
 * User-defined type for [[ExamplePoint]].
 */
class ExamplePointUDT extends UserDefinedType[ExamplePoint] {
  override def sqlType: DataType = ArrayType(DoubleType, false)

  override def pyUDT: String = "pyspark.sql.tests.ExamplePointUDT"

  override def serialize(obj: Any): Seq[Double] = {
    obj match {
      case p: ExamplePoint =>
        Seq(p.x, p.y)
    }
  }

  override def deserialize(datum: Any): ExamplePoint = {
    datum match {
      case values: Seq[_] =>
        val xy = values.asInstanceOf[Seq[Double]]
        assert(xy.length == 2)
        new ExamplePoint(xy(0), xy(1))
      case values: util.ArrayList[_] =>
        val xy = values.asInstanceOf[util.ArrayList[Double]].asScala
        new ExamplePoint(xy(0), xy(1))
    }
  }

  override def userClass: Class[ExamplePoint] = classOf[ExamplePoint]
}
The useful part of the stack trace is this:
com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType
java.lang.ClassCastException: com.ubs.ged.risk.stdout.spark.ExamplePointUDT cannot be cast to org.apache.spark.sql.types.StructType
at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:316)
at org.apache.spark.sql.SQLContext$implicits$.rddToDataFrameHolder(SQLContext.scala:254)
It seems that the UDT needs to be used inside another class (as the type of a field) to work. One solution for using it directly is to wrap it in a Tuple1:
test("udt serialisation") {
val points = Seq(new Tuple1(new ExamplePoint(1.3, 1.6)), new Tuple1(new ExamplePoint(1.3, 1.8)))
val df = SparkContextForStdout.context.parallelize(points).toDF()
df.collect().foreach(println(_))
}
