Azure Stream Analytics: Multiple Windows + JOINS

My architecture:
1 EventHub with 8 Partitions & 2 TPUs
1 Streaming Analytics Job
6 Windows based on the same input (from 1mn to 6mn)
Sample Data:
{side: 'BUY', ticker: 'MSFT', qty: 1, price: 123, tradeTimestamp: 10000000000}
{side: 'SELL', ticker: 'MSFT', qty: 1, price: 124, tradeTimestamp: 1000000000}
The EventHub PartitionKey is ticker
I would like to emit, every second, the following data:
(total quantity bought / total quantity sold) over the last minute, the last 2 minutes, the last 3 minutes, and so on.
What I tried:
WITH TradesWindow AS (
    SELECT
        windowEnd = System.Timestamp,
        ticker,
        side,
        totalQty = SUM(qty)
    FROM [Trades-Stream] TIMESTAMP BY tradeTimestamp PARTITION BY PartitionId
    GROUP BY ticker, side, PartitionId, HoppingWindow(second, 60, 1)
),
TradesRatio1MN AS (
    SELECT
        ticker = b.ticker,
        buySellRatio = b.totalQty / s.totalQty
    FROM TradesWindow b /* SHOULD I PARTITION HERE TOO ? */
    JOIN TradesWindow s /* SHOULD I PARTITION HERE TOO ? */
        ON s.ticker = b.ticker AND s.side = 'SELL'
        AND DATEDIFF(second, b, s) BETWEEN 0 AND 1
    WHERE b.side = 'BUY'
)
/* .... More windows.... */

/* FINAL OUTPUT: joining all the windows */
SELECT
    buySellRatio1MN = bs1.buySellRatio,
    buySellRatio2MN = bs2.buySellRatio
    /* more windows */
INTO [output]
FROM buySellRatio1MN bs1 /* SHOULD I PARTITION HERE TOO ? */
JOIN buySellRatio2MN bs2 /* SHOULD I PARTITION HERE TOO ? */
    ON bs2.ticker = bs1.ticker
    AND DATEDIFF(second, bs1, bs2) BETWEEN 0 AND 1
Issues:
This requires 6 Event Hub consumer groups (each one can only have 5 readers). Why? I don't have 5 × 6 SELECT statements on the input, so why?
The output doesn't seem consistent (I don't know if my JOINs are correct).
Sometimes the job doesn't output at all (maybe a partitioning problem? See the comments in the code about partitioning).
In short, is there a better way to achieve this? I couldn't find anything in the docs or examples about defining multiple windows over a single input, joining them, and then joining the results of those joins.

For the first question, this depends on the internal implementation of the scale-out logic. See details here.
For the output of the join: I don't see the whole query, but if you join a query using a 1-minute window with a query using a 2-minute window with a 1-second time "buffer", you will only get an output every 2 minutes. The UNION operator would be better for this.
From your sample and your goal, I think there is a much easier way to write this query using UDA (User Defined Aggregate).
For this I will define a UDA function called "ratio" first:
function main() {
    this.init = function () {
        this.sumSell = 0.0;
        this.sumBuy = 0.0;
    }
    this.accumulate = function (value, timestamp) {
        if (value.side == "BUY") { this.sumBuy += value.qty; }
        if (value.side == "SELL") { this.sumSell += value.qty; }
    }
    this.computeResult = function () {
        var result;
        if (this.sumSell == 0) {
            result = 0;
        }
        else {
            result = this.sumBuy / this.sumSell;
        }
        return result;
    }
}
Then I can simply use this SQL query for a 60-second window:
SELECT
    windowEnd = System.Timestamp,
    ticker,
    uda.ratio(iothub) AS ratio
FROM iothub PARTITION BY PartitionId
GROUP BY ticker, PartitionId, SlidingWindow(second, 60)
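To cover the other window lengths, one simple option (a rough sketch, not part of the original answer) is to repeat the same query once per window length, each writing to its own output; the results could also be combined into a single stream with the UNION operator mentioned above. The input name iothub follows the answer's example, and the output aliases are illustrative:
SELECT
    windowEnd = System.Timestamp,
    ticker,
    uda.ratio(iothub) AS ratio
INTO [ratio-1mn]
FROM iothub PARTITION BY PartitionId
GROUP BY ticker, PartitionId, SlidingWindow(second, 60)

SELECT
    windowEnd = System.Timestamp,
    ticker,
    uda.ratio(iothub) AS ratio
INTO [ratio-2mn]
FROM iothub PARTITION BY PartitionId
GROUP BY ticker, PartitionId, SlidingWindow(second, 120)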

Related

How would you limit the number of records to process per grouped key in Spark? (for skewed data)

I have two large datasets. There are multiple groupings of the same ids. Each group has a score. I'm trying to broadcast the score to each id in each group. But I have a nice constraint that I don't care about groups with more than 1000 ids.
Unfortunately, Spark keeps reading the full grouping. I can't seem to figure out a way to push down the limit so that Spark only reads up to 1000 records per group and gives up if there are any more.
So far I've tried this:
def run: Unit = {
  // ...
  val scores: RDD[(GroupId, Score)] = readScores(...)
  val data: RDD[(GroupId, Id)] = readData(...)
  val idToScore: RDD[(Id, Score)] = scores.cogroup(data)
    .flatMap(maxIdsPerGroupFilter(1000))
  // ...
}

def maxIdsPerGroupFilter(maxIds: Int)(t: (GroupId, (Iterable[Score], Iterable[Id]))): Iterator[(Id, Score)] = {
  t match {
    case (groupId: GroupId, (scores: Iterable[Score], ids: Iterable[Id])) =>
      if (!scores.iterator.hasNext) {
        return Iterator.empty
      }
      val score: Score = scores.iterator.next()
      val iter = ids.iterator
      val uniqueIds: mutable.HashSet[Id] = new mutable.HashSet[Id]
      while (iter.hasNext) {
        uniqueIds.add(iter.next())
        if (uniqueIds.size > maxIds) {
          return Iterator.empty
        }
      }
      uniqueIds.map((_, score)).iterator
  }
}
(Even with variants where the filter function just returns empty iterators, Spark still insists on reading all the data.)
The side effect of this is that because some groups have too many ids, I have a lot of skew in the data and the job can never finish when processing the full scale of data.
I want the reduce-side to only read in the data it needs, and not crap out because of data skew.
I have a feeling that somehow I need to create a transform that is able to push down a limit or take clause, but I can't figure out how.
Can't you just filter out the groups that have more than 1k records, using count() on the grouped data?
Or, if you also want to keep the groups that have more than 1k records but only pick up to 1k records from each, then in a Spark SQL query you can use ROW_NUMBER() OVER (PARTITION BY id ORDER BY someColumn DESC) AS rn and then add the condition rn <= 1000, as sketched below.
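A minimal sketch of that second suggestion using the DataFrame API; the DataFrame and its column names (groupId, id, score) are illustrative, not taken from the question:
// Assumes an existing SparkSession named `spark` (e.g. in spark-shell).
import spark.implicits._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Hypothetical input: one row per (groupId, id) with the group's score.
val data = Seq(
  ("g1", "id1", 0.9), ("g1", "id2", 0.9), ("g2", "id3", 0.5)
).toDF("groupId", "id", "score")

// Number the rows within each group, then keep at most 1000 rows per group.
val w = Window.partitionBy($"groupId").orderBy($"score".desc)
val capped = data
  .withColumn("rn", row_number().over(w))
  .filter($"rn" <= 1000)
  .drop("rn")

capped.show()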

AutoQuery/OrmLite incorrect total value when using joins

I have this AutoQuery implementation:
var q = AutoQuery.CreateQuery(request, base.Request).SelectDistinct();
var results = Db.Select<ProductDto>(q);
return new QueryResponse<ProductDto>
{
    Offset = q.Offset.GetValueOrDefault(0),
    Total = (int)Db.Count(q),
    Results = results
};
The request has some joins:
public class ProductSearchRequest : QueryDb<GardnerRecord, ProductDto>
, ILeftJoin<GardnerRecord, RecordToBicCode>, ILeftJoin<RecordToBicCode, GardnerBicCode>
{
}
The records get returned correctly, but the total is wrong. I can see 40,000 records in the database, but it tells me there are 90,000. There are multiple RecordToBicCode rows for each GardnerRecord, so it's giving me the number of records multiplied by the number of RecordToBicCode rows.
How do I match the total to the number of GardnerRecord matching the query?
I am using PostgreSQL, so I need the count statement to be like:
select count(distinct r.id) from gardner_record r etc...
Does OrmLite have a way to do this?
I tried:
var q2 = q;
q2.SelectExpression = "select count(distinct \"gardner_record\".\"id\")";
q2.OrderByExpression = null;
var count = Db.Select<int>(q2);
But I get an "object reference not set" error.
AutoQuery is returning the correct total count for your query, which has left joins and so will naturally return more results than the original source table.
You can perform a Distinct count with:
Total = Db.Scalar<long>(q.Select(x => Sql.CountDistinct(x.Id)));
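For context, a hedged sketch of how that fits into the handler from the question (same names as above; only the Total line changes):
var q = AutoQuery.CreateQuery(request, base.Request).SelectDistinct();
var results = Db.Select<ProductDto>(q);
return new QueryResponse<ProductDto>
{
    Offset = q.Offset.GetValueOrDefault(0),
    // Distinct count of the source table's Id instead of counting joined rows.
    Total = (int)Db.Scalar<long>(q.Select(x => Sql.CountDistinct(x.Id))),
    Results = results
};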

Hazelcast Jet rolling aggregation with removing previous data and adding new

We have a use case where we receive messages from Kafka that need to be aggregated. The aggregation has to work in such a way that if an update comes in for the same id, the existing value (if any) is subtracted and the new value is added.
From various forums I learned that Jet doesn't store raw values, only the aggregated result and some internal data.
In that case, how can I achieve this?
Example
Balance 1 {id:1, amount:100} // aggregated result 100
Balance 2 {id:2, amount:200} // 300
Balance 3 {id:1, amount:400} // 600 after removing 100 and adding 400
I could achieve the simple case where every value is just added:
rollingAggregate(AggregateOperations.summingDouble(<logic to add/remove>))
    .drainTo(Sinks.logger());
But I was not able to achieve the aggregation where the existing value needs to be subtracted and the new value added.
Balance 1, 2, 3 are a sequence of messages; the comments show the aggregated value Jet produces after each message.
My aim is to add the new amount if an id arrives for the first time, and to subtract the earlier amount and add the new one if an updated balance arrives (i.e. the id is the same as an earlier one).
You can try a custom aggregate operation which will emit the previous and currently seen values like this:
public static <T> AggregateOperation1<T, ?, Tuple2<T, T>> previousAndCurrent() {
    return AggregateOperation
            .withCreate(() -> new Object[2])
            .<T>andAccumulate((acc, current) -> {
                acc[0] = acc[1];
                acc[1] = current;
            })
            .andExportFinish((acc) -> tuple2((T) acc[0], (T) acc[1]));
}
The output is a Tuple2 of the form (previous, current). Then you can apply rolling aggregation again to that output. To simplify the problem, as input I use (id, amount) pairs.
Pipeline p = Pipeline.create();
p.drawFrom(Sources.<Integer, Long>mapJournal("map", START_FROM_OLDEST)) // (id, amount)
 .groupingKey(Entry::getKey)
 .rollingAggregate(previousAndCurrent(), (key, val) -> val)
 .rollingAggregate(AggregateOperations.summingLong(e -> {
     long prevValue = e.f0() == null ? 0 : e.f0().getValue();
     long newValue = e.f1().getValue();
     return newValue - prevValue;
 }))
 .drainTo(Sinks.logger());

JetConfig config = new JetConfig();
config.getHazelcastConfig().addEventJournalConfig(new EventJournalConfig().setMapName("map"));
JetInstance jet = Jet.newJetInstance(config);
IMapJet<Object, Object> map = jet.getMap("map");
map.put(0, 1L);
map.put(0, 2L);
map.put(1, 10L);
map.put(1, 40L);
jet.newJob(p).join();
This should produce as output: 1, 2, 12, 42.

Azure stream analytics array_agg equivalent?

Is there a way to do the postgres equivalent of array_agg or string_agg in stream analytics? I have data that comes in every few seconds, and would like to get the count of the values within a time frame.
Data:
{time:12:01:01,name:A,location:X,value:10}
{time:12:01:01,name:B,location:X,value:9}
{time:12:01:02,name:C,location:Y,value:5}
{time:12:01:02,name:B,location:Y,value:4}
{time:12:01:03,name:B,location:Z,value:2}
{time:12:01:03,name:A,location:Z,value:3}
{time:12:01:06,name:B,location:Z,value:4}
{time:12:01:06,name:C,location:Z,value:7}
{time:12:01:08,name:B,location:Y,value:1}
{time:12:01:13,name:B,location:X,value:8}
With a sliding window of 2 seconds, I want to group the data to see the following:
12:01:01, 2 events, 9.5 avg, 2 distinct names, 1 distinct location, nameA:1, nameB:1, locationX:1
12:01:02, 4 events, 7 avg, 3 distinct names, 2 distinct location, nameA:1, nameB:2,nameC:1,locationX:1,locationY:1
12:01:03...
12:01:06...
...
I can get the number of events, average, and distinct counts without issue. I use a window as well as a with statement to join on the timestamp to get the aggregated counts for that timestamp. I am having trouble figuring out how to get the total counts by name and location, mostly because I do not know how to aggregate strings in Azure.
with agg1 as (
    select system.timestamp as start,
        avg(value) as avg,
        count(1) as events,
        count(distinct name) as distinct names,
        count(distinct location) as distinct location
    from input timestamp by created
    group by slidingwindow(second,2)
),
agg2 as (
    select agg2_inner.start,
        array_agg(name,'|',ct_name) as countbyname (????)
    from (
        select system.timestamp as start,
            name, count(1) as ct_name
        from input timestamp by created
        group by slidingwindow(second,2), name
    ) as agg2_inner
    group by agg2_inner.start, slidingwindow(seconds,2)
)
select * from agg1 join agg2 on (datediff(second,agg1,agg2) between 0 and 2
    and agg1.start = agg2.start)
There is no set list of names or locations, so the query needs to be a bit dynamic. It is OK if the counts end up as an object in a single column; a later process can parse it to get the individual counts.
As far as I know, Azure Stream Analytics doesn't provide an array_agg method. But it provides the COLLECT method, which returns all of the record values from the window.
I suggest you first use COLLECT to return the array, grouped by the time and the window.
Then you can use an Azure Stream Analytics JavaScript user-defined function to write your own logic to convert the array to the result you want.
For more details, refer to the sample below.
The query like this:
SELECT
time, udf.yourunfname(COLLECT()) as Result
INTO
[YourOutputAlias]
FROM
[YourInputAlias]
Group by time, TumblingWindow(minute, 10)
The UDF looks like this (it just returns the average and the number of events):
function main(InputJSON) {
    var sum = 0;
    for (var i = 0; i < InputJSON.length; i++) {
        sum += InputJSON[i].value;
    }
    var result = { events: InputJSON.length, avg: sum / InputJSON.length };
    return result;
}
Data:
{"name": "A", "time":"12:01:01","value":10}
{"name": "B", "time":"12:01:01","value":9}
{"name": "C", "time":"12:01:02","value":10}
Result: (the original answer showed a screenshot of the output here)
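The UDF above only returns the event count and the average. A possible extension (not part of the original answer) that also builds the per-name and per-location counts the question asked for, assuming the collected records carry the name and location fields from the question's sample data:
function main(InputJSON) {
    var sum = 0;
    var byName = {};
    var byLocation = {};
    for (var i = 0; i < InputJSON.length; i++) {
        sum += InputJSON[i].value;
        // Count occurrences per name and per location.
        byName[InputJSON[i].name] = (byName[InputJSON[i].name] || 0) + 1;
        byLocation[InputJSON[i].location] = (byLocation[InputJSON[i].location] || 0) + 1;
    }
    return {
        events: InputJSON.length,
        avg: sum / InputJSON.length,
        countByName: byName,
        countByLocation: byLocation
    };
}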

Spark - convert string IDs to unique integer IDs

I have a dataset which looks like this, where each user and product ID is a string:
userA, productX
userA, productX
userB, productY
with ~2.8 million products and 300 million users; about 2.1 billion user-product associations.
My end goal is to run Spark collaborative filtering (ALS) on this dataset. Since it takes int keys for users and products, my first step is to assign a unique int to each user and product, and transform the dataset above so that users and products are represented by ints.
Here's what I've tried so far:
val rawInputData = sc.textFile(params.inputPath)
  .filter { line => !(line contains "\\N") }
  .map { line =>
    val parts = line.split("\t")
    (parts(0), parts(1)) // user, product
  }

// find all unique users and assign them IDs
val idx1map = rawInputData.map(_._1).distinct().zipWithUniqueId().cache()

// find all unique products and assign IDs
val idx2map = rawInputData.map(_._2).distinct().zipWithUniqueId().cache()

idx1map.map { case (id, idx) => id + "\t" + idx.toString }
  .saveAsTextFile(params.idx1Out)
idx2map.map { case (id, idx) => id + "\t" + idx.toString }
  .saveAsTextFile(params.idx2Out)

// join with user ID map:
// convert from (userStr, productStr) to (productStr, userIntId)
val rev = rawInputData.cogroup(idx1map).flatMap {
  case (id1, (id2s, idx1s)) =>
    val idx1 = idx1s.head
    id2s.map { (_, idx1) }
}

// join with product ID map:
// convert from (productStr, userIntId) to (userIntId, productIntId)
val converted = rev.cogroup(idx2map).flatMap {
  case (id2, (idx1s, idx2s)) =>
    val idx2 = idx2s.head
    idx1s.map { (_, idx2) }
}

// save output
val convertedInts = converted.map {
  case (a, b) => a.toInt.toString + "\t" + b.toInt.toString
}
convertedInts.saveAsTextFile(params.outputPath)
When I try to run this on my cluster (40 executors with 5 GB RAM each), it's able to produce the idx1map and idx2map files fine, but it fails with out of memory errors and fetch failures at the first flatMap after cogroup. I haven't done much with Spark before so I'm wondering if there is a better way to accomplish this; I don't have a good idea of what steps in this job would be expensive. Certainly cogroup would require shuffling the whole data set across the network; but what does something like this mean?
FetchFailed(BlockManagerId(25, ip-***.ec2.internal, 48690), shuffleId=2, mapId=87, reduceId=25)
The reason I'm not just using a hashing function is that I'd eventually like to run this on a much larger dataset (on the order of 1 billion products, 1 billion users, 35 billion associations), and number of Int key collisions would become quite large. Is running ALS on a dataset of that scale even close to feasible?
It looks like you are essentially collecting all lists of users, just to split them up again. Try just using join instead of cogroup, which seems to me to do more of what you want. For example:
import org.apache.spark.SparkContext._

// Create some fake data
val data = sc.parallelize(Seq(("userA", "productA"), ("userA", "productB"), ("userB", "productB")))
val userId = sc.parallelize(Seq(("userA", 1), ("userB", 2)))
val productId = sc.parallelize(Seq(("productA", 1), ("productB", 2)))

// Replace user names with IDs
val userReplaced = data.join(userId).map { case (_, (prod, user)) => (prod, user) }
// Replace product names with IDs
val bothReplaced = userReplaced.join(productId).map { case (_, (user, prod)) => (user, prod) }

// Check results:
bothReplaced.collect() // Array((1,1), (1,2), (2,2))
Please drop a comment on how well it performs.
(I have no idea what FetchFailed(...) means.)
My platform: CDH 5.7, Spark 1.6.0 standalone.
My test data size: 31,815,167 records in total; 31,562,704 distinct user strings, 4,140,276 distinct product strings.
First idea:
My first idea was to use the collectAsMap action and then use the resulting maps to change the user/product strings to ints, as sketched below. With driver memory up to 12 GB, I got OOM or GC-overhead exceptions (the exception is caused by the driver memory limit).
This idea only works on a small data size; with a bigger data size, you need a bigger driver.
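A rough sketch of that first idea, reusing the names from the question (rawInputData, params); it only works while the distinct IDs fit in driver memory:
// Collect the ID maps to the driver and broadcast them; this is what
// blows up the driver at larger scale.
val userIdMap = sc.broadcast(
  rawInputData.map(_._1).distinct().zipWithUniqueId().collectAsMap())
val productIdMap = sc.broadcast(
  rawInputData.map(_._2).distinct().zipWithUniqueId().collectAsMap())

val encoded = rawInputData.map { case (user, product) =>
  s"${userIdMap.value(user)}\t${productIdMap.value(product)}"
}
encoded.saveAsTextFile(params.outputPath)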
Second idea:
The second idea is to use the join method, as Tobber proposed. Here are some test results:
Job setup:
driver: 2 GB, 2 CPUs;
executors: (8 GB, 4 CPUs) × 7;
I followed these steps (see the sketch below):
1) find the unique user strings and zipWithIndex them;
2) join with the original data;
3) save the encoded data.
The job took about 10 minutes to finish.
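A minimal sketch of those three steps, reusing the question's rawInputData and params (the commenter's exact code is not shown in the post):
// 1) assign integer IDs to the unique user and product strings
val userIdMap = rawInputData.map(_._1).distinct().zipWithIndex()      // (userStr, userId)
val productIdMap = rawInputData.map(_._2).distinct().zipWithIndex()   // (productStr, productId)

// 2) join the IDs back onto the original (userStr, productStr) pairs
val encoded = rawInputData
  .join(userIdMap)                                                    // (userStr, (productStr, userId))
  .map { case (_, (productStr, userId)) => (productStr, userId) }
  .join(productIdMap)                                                 // (productStr, (userId, productId))
  .map { case (_, (userId, productId)) => s"$userId\t$productId" }

// 3) save the encoded data
encoded.saveAsTextFile(params.outputPath)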
