Countdown animation for LocalTime - android-studio

val dateNow = LocalDateTime.now()
val formatter: DateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSSSS")
val dateTime = LocalDateTime.parse(it?.LastUpdated, formatter)
println("The date is :" + dateTime)
val duration = Duration.between(dateTime, dateNow).toMinutes()
val countDown = LocalTime.MIN.plus(
    Duration.ofMinutes(duration)
).toString()
So I'm pulling LastUpdated using Retrofit, getting the difference between the two dates in minutes, and then displaying it in HH:mm format.
Let's say, for example, the output of countDown is 24:45.
How can one make that text count down with animation?
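One common way to animate it is Android's CountDownTimer, which invokes onTick at a fixed interval so you can recompute and redraw the remaining time on every tick. Below is a minimal sketch, not a drop-in answer: countdownText and startCountdown are hypothetical names, minutes is the value you already computed with Duration.between(...), and the remaining time is formatted straight from the Duration so it keeps working past 24 hours (the LocalTime.MIN trick wraps around there).

import android.os.CountDownTimer
import android.widget.TextView
import java.time.Duration

// Sketch only: `countdownText` stands in for whatever TextView shows the value,
// and `minutes` is the difference in minutes computed with Duration.between(...).
fun startCountdown(countdownText: TextView, minutes: Long) {
    val totalMillis = Duration.ofMinutes(minutes).toMillis()

    object : CountDownTimer(totalMillis, 1_000L) {  // tick once per second
        override fun onTick(millisUntilFinished: Long) {
            val remaining = Duration.ofMillis(millisUntilFinished)
            // Show the remaining time as HH:mm, computed from the Duration itself
            countdownText.text = String.format(
                "%02d:%02d",
                remaining.toHours(),
                remaining.toMinutes() % 60
            )
        }

        override fun onFinish() {
            countdownText.text = "00:00"
        }
    }.start()
}

When the timer is created on the main thread its callbacks also run there, so updating the TextView directly from onTick is safe.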

Related

Using reduceByKey calculate the positivity by month for New York State

Using reduceByKey, calculate the positivity by month for New York State. Then use Math.round, sortBy, and collect to display your results as a percentage with 2 decimal places. A sample of the expected result (the percentage for 10/2022 will change depending on when you pull the data):
(2020-03,37.89)
(2020-04,31.79)
(2020-05,5.34)
(2020-06,1.2)
(2020-07,1.08)
(2020-08,0.83)
(2020-09,0.97)
(2020-10,1.3)
(2020-11,2.98)
(2020-12,5.65)
(2021-01,6.27)
(2021-02,3.66)
(2021-03,3.35)
(2021-04,2.69)
(2021-05,1.09)
(2021-06,0.42)
(2021-07,1.56)
(2021-08,3.15)
(2021-09,2.9)
(2021-10,2.31)
(2021-11,3.55)
(2021-12,10.54)
(2022-01,15.09)
(2022-02,2.93)
(2022-03,1.95)
(2022-04,5.45)
(2022-05,7.87)
(2022-06,5.88)
(2022-07,9.54)
(2022-08,7.08)
(2022-09,6.94)
(2022-10,6.71)
Use this info below:
val counties = Array("Kings","Queens","New+York","Suffolk","Bronx","Nassau","Westchester","Erie",
"Monroe","Richmond","Onondaga","Orange","Rockland","Albany","Dutchess",
"Saratoga","Oneida","Niagara","Broome","Ulster","Rensselaer","Schenectady",
"Chautauqua","Oswego","Jefferson","Ontario","St.+Lawrence","Tompkins",
"Putnam","Steuben","Wayne","Chemung","Sullivan","Clinton","Cattaraugus",
"Cayuga","Madison","Warren","Columbia","Livingston","Washington","Herkimer",
"Otsego","Genesee","Fulton","Montgomery","Greene","Tioga","Franklin","Chenango",
"Cortland","Allegany","Delaware","Wyoming","Orleans"
,"Essex","Seneca","Schoharie","Lewis","Yates","Schuyler","Hamilton")
val base_url = "https://health.data.ny.gov/resource/xdss-u53e.json?County="
val urls = counties.map(a => base_url+a) //Makes a url for each county
//This gets the contents of the url
//results is an array with one entry per county
//the data for each county is in JSON format
val results = urls.map(u => scala.io.Source.fromURL(u).mkString)
//Create an rdd in 32 partitions (there is a lot of data)
//Reads the json and converts it into a spark dataframe (we'll do this later)
//and then converts the df into an rdd
//finally, extract the date, the county name, the new cases on that date and tests done
//on that date
val data_rdd = spark.read.json(sc.parallelize(results, 32))
  .rdd
  .map(r => (r(5).toString.slice(0, 10),
             r(0).toString,
             r(4).toString.toInt,
             r(7).toString.toInt))

Kotlin string date formatter

I'm trying to parse a string date
"AuthDate": "2021-08-19T23:40:52+04:00",
Here is the code for parsing and displaying:
var date = item?.authDate.toString()
val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssz")
val parsedDate = formatter.parse(date)
val displayFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd, HH:MM:SS")
text = displayFormatter.format(parsedDate).toString()
This works fine except for one thing: the seconds are always displayed as "00".
For example, if authDate is 2021-08-19T23:40:52+04:00,
displayed authDate is 2021-08-19, 23:40:00
not 23:40:52 as I want.
val formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssz")
...
val displayFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd, HH:MM:SS")
Notice how the first of these uses mm and ss, while the second uses MM and SS. The former says to parse hours, minutes, then seconds. The latter says to display hours, the month, and then the fraction of a second. See the documentation for a full list of the specifiers, but you're probably looking for
val displayFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd, HH:mm:ss")

Pyspark - Applying custom function on structured streaming

I have 4 columns: ['clientTimestamp', 'sensor_id', 'activity', 'incidents']. I consume data from a Kafka stream, preprocess it, and aggregate it in a window.
If I do a groupby with .count(), the stream works very well, writing each window with its count to the console.
This works:
df = df.withWatermark("clientTimestamp", "1 minutes")\
    .groupby(window(df.clientTimestamp, "1 minutes", "1 minutes"), col('sensor_type')).count()
query = df.writeStream.outputMode("append").format('console').start()
query.awaitTermination()
But the real goal is to find the total time for which critical activity was live.
That is, for each sensor_type I group the data by window, get the list of critical activities, and find the total time for which all the critical activity lasted (the code is below). But I'm not sure I'm using the UDF in the right way, because the method below does not work. Can anyone provide an example of applying a custom function to each window group and writing the output to the console?
This does not work:
@f.pandas_udf(schemahh, f.PandasUDFType.GROUPED_MAP)
def calculate_time(pdf):
    pdf = pdf.reset_index(drop=True)
    total_time = 0
    index_list = pdf.index[pdf['activity'] == 'critical'].to_list()
    for ind in index_list:
        start = pdf.loc[ind]['clientTimestamp']
        end = pdf.loc[ind + 1]['clientTimestamp']
        diff = start - end
        time_n_mins = round(diff.seconds / 60, 2)
        total_time = total_time + time_n_mins
    largest_session_time = total_time
    new_pdf = pd.DataFrame(columns=['sensor_type', 'largest_session_time'])
    new_pdf.loc[0] = [pdf.loc[0]['sensor_type'], largest_session_time]
    return new_pdf
df = df.withWatermark("clientTimestamp", "1 minutes")\
    .groupby(window(df.clientTimestamp, "1 minutes", "1 minutes"), col('sensor_type'), col('activity')).apply(calculate_time)
query = df.writeStream.outputMode("append").format('console').start()
query.awaitTermination()

How to use a PySpark window function on new unprocessed data?

I have developed window functions on a PySpark DataFrame to calculate the total transaction amount made by a customer on a monthly basis, per transaction.
For example:
The input table has this data:
And the window function processes the data and inserts it into a table.
Now, if I get new transactions today, I want to develop code that loads the last month's transactions into a Spark DataFrame, runs the window function only on the new rows, and saves them into the processed table. The current window function processes all the rows, and I then have to manually skip the already-inserted records and insert only the new ones. This uses a lot of resources and memory, especially once the window function covers a year.
# Function to apply window function
def cumulative_total_CR(df, from_column, to_column, window_function):
    intermediate_column = from_column + "_temp"
    df = df.withColumn(from_column, df[from_column].cast("double"))
    df = df.withColumn(intermediate_column, when(col("Flow") == 'C', df[from_column]).otherwise(0))
    df = df.withColumn(to_column, F.sum(intermediate_column).over(window_function))
    return df

def cumulative_total_DR(df, from_column, to_column, window_function):
    intermediate_column = from_column + "_temp"
    df = df.withColumn(from_column, df[from_column].cast("double"))
    df = df.withColumn(intermediate_column, when(col("Flow") == 'D', df[from_column]).otherwise(0))
    df = df.withColumn(to_column, F.sum(intermediate_column).over(window_function))
    return df

# Window function:
window_function_30_days = (Window.partitionBy("CUSNO").orderBy(F.col("TxnDateTime").cast('long')).rangeBetween(-30, 0))

df = ...  # load data from Hive
# append TxnDate and TxnTime into a new column TxnDateTime, cast as timestamp with format 'yyyy-MM-dd HH:mm:ss.SSS'
df = cumulative_total_CR(df, "TXNAMT", "Total_Cr_Monthly_Amt", window_function_30_days)
df = cumulative_total_DR(df, "TXNAMT", "Total_Dr_Monthly_Amt", window_function_30_days)
# save the new records to disk

Compare two dataframes and add mismatched values as a new column in Spark

The difference between two records is:
df1.except(df2)
It's giving results like this.
How can I compare two dataframes, find what changed and in which column, and add this value as a new column? The expected output looks like this.
Join the two dataframes on the primary key, then, using withColumn and a UDF, pass in both column values (old and new); in the UDF, compare the data and return the value if they are not the same.
val check = udf((old_val: String, new_val: String) => if (old_val == new_val) new_val else "")

val df_check = df
  .withColumn("Check_Name", check(df.col("name"), df.col("new_name")))
  .withColumn("Check_Namelast", check(df.col("lastname"), df.col("new_lastname")))
Or, as a function:
// Compare the two DataFrames row by row and record which columns changed.
// Assumes both sides share the same schema and row order, with the primary key in the first column.
def fn(old_df: DataFrame, new_df: DataFrame): DataFrame = {
  val columnNames = old_df.columns
  val oldRows = old_df.collect()   // make df into an array to loop through
  val newRows = new_df.collect()   // make df into an array to loop through

  // loop through all rows and all columns, collecting a remark for every changed column
  val remarks = oldRows.zip(newRows).map { case (oldRow, newRow) =>
    val changes = columnNames.indices
      .filter(j => oldRow(j) != newRow(j))
      .map(j => columnNames(j) + " has value changed")
      .mkString(", ")
    (oldRow(0).toString, changes)   // primary key + remarks column
  }

  // convert the array back into a DataFrame
  import spark.implicits._
  remarks.toSeq.toDF("primary_key", "remarks")
}
