If I have a DataFrame with two Timestamp columns, called 'start' and 'end', how can I calculate a list of all the hours between 'start' and 'end'?
Another way to say this might be: "during which hours was the record active?"
For example:
// Input
| start| end|
|2017-06-01 09:30:00|2017-06-01 11:30:00|
|2017-06-01 14:00:00|2017-06-01 14:30:00|
// Result
| start| end|hours_active|
|2017-06-01 09:30:00|2017-06-01 11:30:00| (9,10,11)|
|2017-06-01 14:00:00|2017-06-01 14:30:00| (14)|
Thanks
If the difference between the start and end is always less than 24 hours, you can use the following UDF. Assuming the type of the columns is Timestamp:
val getActiveHours = udf((s: Long, e: Long) => {
  if (e >= s) {
    (s to e).toSeq
  } else {
    // the end is on the next day: wrap around midnight (hours run 0-23)
    (s to 23L).toSeq ++ (0L to e).toSeq
  }
})
df.withColumn("hours_active", getActiveHours(hour($"start"), hour($"end")))
Using the example data in the question gives:
+---------------------+---------------------+------------+
|start |end |hours_active|
+---------------------+---------------------+------------+
|2017-06-01 09:30:00.0|2017-06-01 11:30:00.0|[9, 10, 11] |
|2017-06-01 14:00:00.0|2017-06-01 14:30:00.0|[14] |
+---------------------+---------------------+------------+
Note: For larger differences between the timestamps the above code can be adjusted to take that into account. It would then be necessary to look at other fields in addition to the hour, e.g. day/month/year.
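Outside Spark, the hour-expansion logic of the UDF can be sketched in plain Python (a hypothetical `active_hours` helper, with the same assumption that the span is under 24 hours):

```python
def active_hours(start_hour: int, end_hour: int) -> list:
    """Return every hour of day touched between start_hour and end_hour.

    Assumes the span is shorter than 24 hours, as in the UDF above.
    """
    if end_hour >= start_hour:
        return list(range(start_hour, end_hour + 1))
    # The end falls on the next day: wrap around midnight.
    return list(range(start_hour, 24)) + list(range(0, end_hour + 1))

print(active_hours(9, 11))   # [9, 10, 11]
print(active_hours(14, 14))  # [14]
print(active_hours(23, 1))   # [23, 0, 1]
```

The wrap-around branch is what makes a 23:30 to 01:15 record report hours 23, 0 and 1 rather than an empty range.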
I'm trying to write a Kusto query that needs to count how many intervals overlap for a certain date range. This is what my table looks like:
userID | interval1 | interval2
24 | 21.1.2012 10:40 | 21.1.2012 11:00
25 | 21.1.2012 9:55 | 21.1.2012 10:50
I would like to consider the time range given by [min(interval1), max(interval2)] with a 1s step, and for each instant in this range I would like to know how many intervals from the table overlap it. For example, at 21.1.2012 10:00 there is only one interval, but at 10:45 there are two overlapping intervals.
Thank you
Every interval1 marks the start of an additional user session (+1).
Every interval2 marks the end of a user session (-1).
The accumulated sum of these deltas is the number of active sessions.
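The same +1/-1 sweep can be sketched in plain Python (illustrative only; the Kusto solutions do this with make-series and an accumulated sum):

```python
from datetime import datetime

# (interval1, interval2) pairs from the question
sessions = [
    (datetime(2012, 1, 21, 10, 40), datetime(2012, 1, 21, 11, 0)),
    (datetime(2012, 1, 21, 9, 55),  datetime(2012, 1, 21, 10, 50)),
]

# Every session start is a +1 event, every session end a -1 event
events = sorted([(s, +1) for s, _ in sessions] +
                [(e, -1) for _, e in sessions])

# The running sum of the deltas is the number of active sessions
active, timeline = 0, []
for t, delta in events:
    active += delta
    timeline.append((t, active))

print(timeline)  # active counts: 1, 2, 1, 0
```

Between 10:40 and 10:50 the running sum is 2, which matches the expected answer for 10:45.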
Solution 1 (Rendering level)
render timechart with (accumulate=True)
let t = (datatable (userID:int,interval1:datetime,interval2:datetime)
[
24 ,datetime(2012-01-21 10:40) ,datetime(2012-01-21 11:00)
,25 ,datetime(2012-01-21 09:55) ,datetime(2012-01-21 10:50)
]);
let from_dttm = datetime(2012-01-21 09:30);
let to_dttm = datetime(2012-01-21 11:30);
let sessions_starts = (t | project delta = 1, dttm = interval1);
let sessions_ends = (t | project delta = -1, dttm = interval2);
union sessions_starts, sessions_ends
| make-series delta = sum(delta) on dttm from from_dttm to to_dttm step 1s
| render timechart with (accumulate=True)
Solution 2 (Data level)
mv-apply + row_cumsum
let t = (datatable (userID:int,interval1:datetime,interval2:datetime)
[
24 ,datetime(2012-01-21 10:40) ,datetime(2012-01-21 11:00)
,25 ,datetime(2012-01-21 09:55) ,datetime(2012-01-21 10:50)
]);
let from_dttm = datetime(2012-01-21 09:30);
let to_dttm = datetime(2012-01-21 11:30);
let sessions_starts = (t | project delta = 1, dttm = interval1);
let sessions_ends = (t | project delta = -1, dttm = interval2);
union sessions_starts, sessions_ends
| make-series delta = sum(delta) on dttm from from_dttm to to_dttm step 1s
| mv-apply delta to typeof(long), dttm to typeof(datetime) on (project active_users = row_cumsum(delta), dttm)
| render timechart with (xcolumn=dttm, ycolumns=active_users)
Take a look at this sample from the Kusto docs:
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/samples?pivots=azuredataexplorer#chart-concurrent-sessions-over-time
X
| mv-expand samples = range(bin(interval1, 1m), interval2, 1m)
| summarize count_userID = count() by bin(todatetime(samples), 1m)
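The docs sample expands each interval into one sample per minute and then counts the samples per bin; the same counting can be sketched in plain Python (an illustrative sketch, not the Kusto engine):

```python
from datetime import datetime, timedelta
from collections import Counter

rows = [
    (24, datetime(2012, 1, 21, 10, 40), datetime(2012, 1, 21, 11, 0)),
    (25, datetime(2012, 1, 21, 9, 55),  datetime(2012, 1, 21, 10, 50)),
]

step = timedelta(minutes=1)
concurrent = Counter()
for _, start, end in rows:
    t = start
    while t <= end:          # mv-expand: one 1-minute sample per interval
        concurrent[t] += 1   # summarize count() by bin
        t += step

print(concurrent[datetime(2012, 1, 21, 10, 0)])   # 1
print(concurrent[datetime(2012, 1, 21, 10, 45)])  # 2
```

Note the sample uses a 1-minute bin; for the 1s granularity asked for in the question the step would be `1s`, at the cost of 60x more samples.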
I am not able to correctly divide time-series data by another time-series.
I get data from my TestTable, which results in the following view:
TagId, sdata
8862, [0,0,0,0,2,2,2,3,4]
6304, [0,0,0,0,2,2,2,3,2]
I want to divide the sdata series for tagId 8862 by the series from 6304.
I expect the following result:
[NaN,NaN,NaN,NaN,1,1,1,1,2]
When I try the code below, I only get two empty ddata rows in my S2 results:
TestTable
| where TagId in (8862,6304)
| make-series sdata = avg(todouble(Value)) default=0 on TimeStamp in range (datetime(2019-06-27), datetime(2019-06-29), 1m) by TagId
| as S1;
S1 | project ddata = series_divide(sdata[0].['sdata'], sdata[1].['sdata'])
| as S2
What am I doing wrong?
Both arguments to series_divide() can't come from two separate rows in the dataset.
Here's an example of how you could achieve that (based on the limited, perhaps not fully representative use case shown in your question):
let T =
datatable(tag_id:long, sdata:dynamic)
[
8862, dynamic([0,0,0,0,2,2,2,3,4]),
6304, dynamic([0,0,0,0,2,2,2,3,2]),
]
;
let get_value_from_T = (_tag_id:long)
{
toscalar(
T
| where tag_id == _tag_id
| take 1
| project sdata
)
};
print sdata_1 = get_value_from_T(8862), sdata_2 = get_value_from_T(6304)
| extend result = series_divide(sdata_1, sdata_2)
which returns:
|sdata_1 | sdata_2 | result |
|--------------------|---------------------|---------------------------------------------|
|[0,0,0,0,2,2,2,3,4] | [0,0,0,0,2,2,2,3,2] |["NaN","NaN","NaN","NaN",1.0,1.0,1.0,1.0,2.0]|
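The elementwise semantics visible in that result (0/0 yields NaN) can be reproduced in plain Python; the handling of a non-zero numerator over zero as infinity is an assumption of this sketch, not something shown in the output above:

```python
import math

def series_divide(a, b):
    """Elementwise a[i] / b[i]; 0/0 yields NaN (assumed: x/0 yields +/-inf)."""
    out = []
    for x, y in zip(a, b):
        if y == 0:
            out.append(math.nan if x == 0 else math.copysign(math.inf, x))
        else:
            out.append(x / y)
    return out

sdata_1 = [0, 0, 0, 0, 2, 2, 2, 3, 4]
sdata_2 = [0, 0, 0, 0, 2, 2, 2, 3, 2]
print(series_divide(sdata_1, sdata_2))
# [nan, nan, nan, nan, 1.0, 1.0, 1.0, 1.0, 2.0]
```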
I would like to use Spark to parse network messages and group them into logical entities in a stateful manner.
Problem Description
Let's assume each message is in one row of an input dataframe, depicted below.
| row | time | raw payload |
+-------+------+---------------+
| 1 | 10 | TEXT1; |
| 2 | 20 | TEXT2;TEXT3; |
| 3 | 30 | LONG- |
| 4 | 40 | TEXT1; |
| 5 | 50 | TEXT4;TEXT5;L |
| 6 | 60 | ONG |
| 7 | 70 | -TEX |
| 8 | 80 | T2; |
The task is to parse the logical messages in the raw payload, and provide them in a new output dataframe. In the example each logical message in the payload ends with a semicolon (delimiter).
The desired output dataframe could then look as follows:
| row | time | message |
+-------+------+---------------+
| 1 | 10 | TEXT1; |
| 2 | 20 | TEXT2; |
| 3 | 20 | TEXT3; |
| 4 | 30 | LONG-TEXT1; |
| 5 | 50 | TEXT4; |
| 6 | 50 | TEXT5; |
| 7 | 50 | LONG-TEXT2; |
Note that some rows do not yield a new row in the result (e.g. rows 4, 6, 7, 8), and some yield even multiple rows (e.g. rows 2, 5).
My questions:
Is this a use case for a UDAF? If so, how should I for example implement the merge function? I have no idea what its purpose is.
Since the message ordering matters (I cannot process LONG-TEXT1 and LONG-TEXT2 properly without respecting the message order), can I tell Spark to parallelize at a higher level (e.g. per calendar day of messages) but not within a day (e.g. the events at times 50, 60, 70, 80 need to be processed in order)?
Follow-up question: is it conceivable that the solution will be usable not just in traditional Spark, but also in Spark Structured Streaming? Or does the latter require its own kind of stateful processing method?
Generally, you can run arbitrary stateful aggregations on Spark Streaming by using mapGroupsWithState or flatMapGroupsWithState. You can find some examples here. None of those, though, will guarantee that the processing of the stream is ordered by event time.
If you need to enforce data ordering, you should try to use window operations on event time. In that case you need to run stateless operations instead, but if the number of elements in each window group is small enough, you can use collect_list, for instance, and then apply a UDF (where you can manage the state for each window group) on each list.
OK, in the meantime I figured out how to do this with a UDAF:
class TagParser extends UserDefinedAggregateFunction {

  override def inputSchema: StructType =
    StructType(StructField("value", StringType) :: Nil)

  override def bufferSchema: StructType = StructType(
    StructField("parsed", ArrayType(StringType)) ::
    StructField("rest", StringType) :: Nil)

  override def dataType: DataType = ArrayType(StringType)

  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = IndexedSeq[String]()
    buffer(1) = null
  }

  def doParse(str: String, buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = IndexedSeq[String]()
    val prevRest = buffer(1)
    var idx = -1
    val strToParse = if (prevRest != null) prevRest + str else str
    do {
      val oldIdx = idx
      idx = strToParse.indexOf(';', oldIdx + 1)
      if (idx == -1) {
        // no further delimiter: keep the unterminated rest for the next row
        buffer(1) = strToParse.substring(oldIdx + 1)
      } else {
        val newlyParsed = strToParse.substring(oldIdx + 1, idx)
        buffer(0) = buffer(0).asInstanceOf[IndexedSeq[String]] :+ newlyParsed
        buffer(1) = null
      }
    } while (idx != -1)
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (buffer == null) {
      return
    }
    doParse(input.getAs[String](0), buffer)
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    throw new UnsupportedOperationException

  override def evaluate(buffer: Row): Any = buffer(0)
}
Here is a demo app that uses the above UDAF to solve the problem from above:
case class Packet(time: Int, payload: String)

object TagParserApp extends App {

  val spark, sc = ... // kept out for brevity

  val df = sc.parallelize(List(
    Packet(10, "TEXT1;"),
    Packet(20, "TEXT2;TEXT3;"),
    Packet(30, "LONG-"),
    Packet(40, "TEXT1;"),
    Packet(50, "TEXT4;TEXT5;L"),
    Packet(60, "ONG"),
    Packet(70, "-TEX"),
    Packet(80, "T2;")
  )).toDF()

  val tp = new TagParser
  val window = Window.rowsBetween(Window.unboundedPreceding, Window.currentRow)
  val df2 = df.withColumn("msg", tp.apply(df.col("payload")).over(window))
  df2.show()
}
This yields:
+----+-------------+--------------+
|time| payload| msg|
+----+-------------+--------------+
| 10| TEXT1;| [TEXT1]|
| 20| TEXT2;TEXT3;|[TEXT2, TEXT3]|
| 30| LONG-| []|
| 40| TEXT1;| [LONG-TEXT1]|
| 50|TEXT4;TEXT5;L|[TEXT4, TEXT5]|
| 60| ONG| []|
| 70| -TEX| []|
| 80| T2;| [LONG-TEXT2]|
+----+-------------+--------------+
The main issue for me was to figure out how to actually apply this UDAF, namely using this:
df.withColumn("msg", tp.apply(df.col("payload")).over(window))
The only thing I still need to figure out are the aspects of parallelization (which I only want to happen where we do not rely on ordering), but that's a separate issue for me.
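Stripped of the Spark machinery, the buffering idea behind doParse can be sketched in plain Python (a hypothetical `parse_chunks` helper that carries the unterminated rest across chunks):

```python
def parse_chunks(chunks, delimiter=";"):
    """Yield (chunk_index, message) pairs; partial messages are buffered."""
    rest = ""
    for i, chunk in enumerate(chunks):
        parts = (rest + chunk).split(delimiter)
        rest = parts.pop()  # last piece has no delimiter yet: buffer it
        for msg in parts:
            yield i, msg + delimiter

chunks = ["TEXT1;", "TEXT2;TEXT3;", "LONG-", "TEXT1;",
          "TEXT4;TEXT5;L", "ONG", "-TEX", "T2;"]
for i, msg in parse_chunks(chunks):
    print(i, msg)
```

Rows 3, 6 and 7 contribute nothing on their own; their payload only surfaces once a later chunk supplies the closing delimiter, which is exactly why the aggregation has to see the rows in order.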
I need to calculate additional features from a dataset using multiple leads and lags. The high number of leads and lags causes an out-of-memory error.
Data frame:
|----------+----------------+---------+---------+-----+---------|
| DeviceID | Timestamp | Sensor1 | Sensor2 | ... | Sensor9 |
|----------+----------------+---------+---------+-----+---------|
| | | | | | |
| Long | Unix timestamp | Double | Double | | Double |
| | | | | | |
|----------+----------------+---------+---------+-----+---------|
Window definition:
// Each window contains about 600 rows
val w = Window.partitionBy("DeviceID").orderBy("Timestamp")
Compute extra features:
var res = df
val sensors = (1 to 9).map(i => s"Sensor$i")
for (i <- 1 to 5) {
  for (s <- sensors) {
    // withColumn needs a target column name; the names here are illustrative
    res = res.withColumn(s"${s}_lag_$i", lag(s, i).over(w))
             .withColumn(s"${s}_lead_$i", lead(s, i).over(w))
  }
  // Compute features from all the lag's and lead's
  [...]
}
System info:
RAM: 16G
JVM heap: 11G
The code gives correct results with small datasets, but gives an out-of-memory error with 10GB of input data.
I think the culprit is the high number of window functions because the DAG shows a very long sequence of
Window -> WholeStageCodeGen -> Window -> WholeStageCodeGen ...
Is there any way to calculate the same features more efficiently?
For example, is it possible to get lag(Sensor1, 1), lag(Sensor2, 1), ..., lag(Sensor9, 1) without calling lag(..., 1) nine times?
If the answer to the previous question is no, then how can I avoid out-of-memory? I have already tried increasing the number of partitions.
You could try something like
res = res.select(col("*"), lag("Sensor1", 1).over(w), lag("Sensor1", 2).over(w), ...)
That is, write everything in a single select instead of many withColumn calls.
Then there will be only one Window node in the plan. Maybe it helps with the performance.
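As a reminder of what each of those expressions computes, the semantics of lag/lead within one ordered partition can be sketched in plain Python (hypothetical helpers, not the Spark API):

```python
def lag(xs, n, default=None):
    """Value n rows earlier within an ordered partition (like Spark's lag)."""
    return [default] * n + xs[:-n] if n else list(xs)

def lead(xs, n, default=None):
    """Value n rows later within an ordered partition (like Spark's lead)."""
    return xs[n:] + [default] * n if n else list(xs)

sensor1 = [1.0, 2.0, 3.0, 4.0]
print(lag(sensor1, 1))   # [None, 1.0, 2.0, 3.0]
print(lead(sensor1, 2))  # [3.0, 4.0, None, None]
```

All 90 such shifted columns (9 sensors x 5 offsets x lag/lead) read the same sorted partition, which is why collapsing them into one select, and hence one Window, is worth trying.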
I am developing a Spark Streaming application where I want to have one global numeric ID per item in my data stream. Having an interval/RDD-local ID is trivial:
dstream.transform(_.zipWithIndex).map(_.swap)
This will result in a DStream like:
// key: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 || 0 | 1 | 2 | 3 | 4 || 0
// val: a | b | c | d | e | f | g | h | i || j | k | l | m | n || o
(where the double bar || indicates the beginning of a new RDD).
What I finally want to have is:
// key: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 || 9 | 10 | 11 | 12 | 13 || 14
// val: a | b | c | d | e | f | g | h | i || j | k | l | m | n || o
How can I do that in a safe and performant way?
This seems like a trivial task, but I find it very hard to preserve state (state = "number of items seen so far") between RDDs. Here are two approaches I tried, updating the number of items seen so far (plus the number in the current interval) using updateStateByKey with a bogus key:
val intervalItemCounts = inputStream.count().map((1, _))
// intervalItemCounts looks like:
// K: 1 || 1 || 1
// V: 9 || 5 || 1
val updateCountState: (Seq[Long], Option[ItemCount]) => Option[ItemCount] =
(itemCounts, maybePreviousState) => {
val previousState = maybePreviousState.getOrElse((0L, 0L))
val previousItemCount = previousState._2
Some((previousItemCount, previousItemCount + itemCounts.head))
}
val totalNumSeenItems: DStream[ItemCount] = intervalItemCounts.
updateStateByKey(updateCountState).map(_._2)
// totalNumSeenItems looks like:
// V: (0,9) || (9,14) || (14,15)
// The first approach uses a cartesian product with the
// 1-element state DStream. (Is this performant?)
val increaseRDDIndex1: (RDD[(Long, Char)], RDD[ItemCount]) =>
RDD[(Long, Char)] =
(streamData, totalCount) => {
val product = streamData.cartesian(totalCount)
product.map(dataAndOffset => {
val ((localIndex: Long, data: Char),
(offset: Long, _)) = dataAndOffset
(localIndex + offset, data)
})
}
val globallyIndexedItems1: DStream[(Long, Char)] = inputStream.
transformWith(totalNumSeenItems, increaseRDDIndex1)
// The second approach uses a take() output operation on the
// 1-element state DStream beforehand. (Is this valid?? Will
// the closure be serialized and shipped in every interval?)
val increaseRDDIndex2: (RDD[(Long, Char)], RDD[ItemCount]) =>
RDD[(Long, Char)] = (streamData, totalCount) => {
val offset = totalCount.take(1).head._1
streamData.map(keyValue => (keyValue._1 + offset, keyValue._2))
}
val globallyIndexedItems2: DStream[(Long, Char)] = inputStream.
transformWith(totalNumSeenItems, increaseRDDIndex2)
Both approaches give the correct result (with the local[*] master), but I am wondering about performance (shuffling etc.), whether it works in a truly distributed environment, and whether it shouldn't be a lot easier than that...
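The bookkeeping that both approaches implement (carry the running item total forward as the offset of the next batch) can be sketched without Spark (a hypothetical `globally_index` helper):

```python
def globally_index(batches):
    """Assign a monotonically increasing global index across batches."""
    offset = 0  # plays the role of the updateStateByKey state
    for batch in batches:
        yield [(offset + i, v) for i, v in enumerate(batch)]
        offset += len(batch)

batches = [list("abcdefghi"), list("jklmn"), list("o")]
for indexed in globally_index(batches):
    print(indexed)
```

The single integer of state per interval is why the cartesian product with a 1-element RDD works: each record only needs the one offset value, not a full join.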