Rally CumulativeFlowData endpoint showing data > 24 hours old - rallyapi

Using the cumulative flow diagram available in Rally, we can see that the chart includes data up to the current day. When pulling data from the Rally API, however, the most recent data we can get is at least 24 hours old.
We have tried pulling cumulative flow data by both Iteration ID and creation date, and both only provide data with a 24-hour delay.
Does anyone know why there is such a big lag with the data being available via the API?
https://rally1.rallydev.com/slm/webservice/v2.0/iterationcumulativeflowdata?workspace=<<myworkspace>>&project=https://rally1.rallydev.com/slm/webservice/v2.0/project/<<myprojectId>>&query=(IterationObjectID = <<myIterationId>>)&fetch=CreationDate,CardEstimateTotal,CardState&start=1&pagesize=200
e.g.
[Screenshot: Rally view of the CFD]
[Screenshot: the same data as retrieved from the API]
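For reference, here is a minimal sketch of how we issue the request above from Node (a sketch only: it assumes the Rally API key is accepted in the zsessionid header, which is how WSAPI key authentication normally works, and the placeholder IDs are the same as in the URL):
// Minimal sketch of the request above (placeholder IDs as in the URL;
// a Rally API key is assumed to be accepted in the zsessionid header).
const baseUrl =
  "https://rally1.rallydev.com/slm/webservice/v2.0/iterationcumulativeflowdata";

const params = new URLSearchParams({
  workspace: "<<myworkspace>>",
  project: "https://rally1.rallydev.com/slm/webservice/v2.0/project/<<myprojectId>>",
  query: "(IterationObjectID = <<myIterationId>>)",
  fetch: "CreationDate,CardEstimateTotal,CardState",
  start: "1",
  pagesize: "200",
});

async function getCumulativeFlowData(apiKey: string) {
  const response = await fetch(`${baseUrl}?${params}`, {
    headers: { zsessionid: apiKey },
  });
  const body = await response.json();
  // WSAPI wraps results in a QueryResult envelope.
  return body.QueryResult.Results;
}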

Related

Prisma times out with large datasets

I have an app (Node.js (Fastify), Postgres with Prisma) that writes sales from an external API onto the Postgres db based on dates. Once the sales have been written, the timestamps are written to a table so we can later check whether that date has already been queried (so if we request the sales for October 2019, it checks whether October 2019 has been queried before and returns the sales from the db if that's the case; otherwise it fetches them from the external API, writes them to the db, and writes October 2019 to the date table for next time).
My issue is when trying to get all the sales, which can span several years. The way I do it right now is as follows (please note that the only endpoint the API exposes is year/month, so I have no choice but to iterate my requests month by month; a rough sketch follows the steps):
Get the number of months between the first and last sale (for example, 97)
Loop over each month and check whether this month has been queried before
If it has been queried before, do nothing
If it has not been queried before, fetch this year/month combination from the external API and write it to the db
Once the loop has finished, get all the sales from the db between those two dates
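A rough sketch of that loop, with hypothetical model names (queriedMonth, sale) and a hypothetical fetchSalesFromExternalApi helper, not the real code:
// Rough sketch of the backfill loop described above.
// Model names and fetchSalesFromExternalApi are hypothetical.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Hypothetical external API helper -- not part of the real code.
declare function fetchSalesFromExternalApi(
  year: number,
  month: number
): Promise<Array<{ externalId: string; date: Date; amount: number }>>;

async function backfillSales(firstSale: Date, lastSale: Date) {
  const cursor = new Date(firstSale.getFullYear(), firstSale.getMonth(), 1);

  while (cursor.getTime() <= lastSale.getTime()) {
    const year = cursor.getFullYear();
    const month = cursor.getMonth() + 1;

    // Has this year/month combination been queried before?
    const alreadyQueried = await prisma.queriedMonth.findFirst({ where: { year, month } });

    if (!alreadyQueried) {
      // Fetch this month from the external API and upsert each sale.
      const sales = await fetchSalesFromExternalApi(year, month);
      for (const sale of sales) {
        await prisma.sale.upsert({
          where: { externalId: sale.externalId },
          update: sale,
          create: sale,
        });
      }
      await prisma.queriedMonth.create({ data: { year, month } });
    }

    cursor.setMonth(cursor.getMonth() + 1);
  }

  // Once the loop has finished, get all the sales between those two dates.
  return prisma.sale.findMany({ where: { date: { gte: firstSale, lte: lastSale } } });
}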
The issue I have is that even though I paginated my endpoint, Prisma times out with some stores while upserting. Some months can have thousands of sales, with relations for the products sold, and I suspect that's where the problem is.
Here is the error message
Timed out fetching a new connection from the connection pool. More info: http://pris.ly/d/connection-pool (Current connection pool timeout: 10, connection limit: 10)
My question is: is my logic bad and should it be redone, or should I not write that many objects to the database? Is there a best practice I'm missing?
I did not provide code because it works and I feel the issue lies in the logic more than the code itself, but I will happily provide code if needed.
Prisma has its own connection pool, and you need to tell it about Heroku's connection limit.
You'll need a ".profile" file in your root folder containing:
export DATABASE_URL="$DATABASE_URL?connection_limit=10&pool_timeout=0"
".profile" is like .bashrc or .zshrc. Its content will be executed on startup of your server. The line above will overwrite the standard env variable for databases on heroku.

Is there a recent or known issue with the #flurry Data Download?

Our #flurry app Data Download appears to be broken.
We recently requested raw analytics data (October 2nd, 2020), but the result was much smaller than we expected; only a small amount of raw data came back. For example, we compared an export for an arbitrary period downloaded around September 11th with the same export downloaded after October 5th:
Data downloaded around September 11th: 16 MB
Data downloaded after October 5th: 18.6 kB
Both exports cover the same period and the same data selection.
Very little raw data is exported, yet the event counts in Flurry Analytics look fine; every graph appears normal.
Flurry Analytics website --> about 30,000 events
Exported data --> about 60 records
The export file format (CSV, XML, JSON) makes no difference; the result is the same.
Additional information, October 7th, 2020:
The download steps were as follows.
Log in to the Flurry Analytics console
Click Data Download under Sessions
Select the application SmartSync (iOS) or SmartSync (Android)
Set Event for the desired period, and choose CSV or another format
Is this a known issue or a recent bug?
If anyone knows any tips or the correct settings, could you please advise?
This is now fixed. Please email support if you have further difficulties.

Data being overwritten when outputting data from Stream Analytics to Power BI

Lately I've been playing around with Stream Analytics queries with Power BI as the output sink. I made a simple query that retrieves the total count of HTTP response codes for our website requests over time and groups them by date and response code.
The input data is retrieved from a storage account that holds blob storage. This is my query:
SELECT
DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0) as datum,
request.ArrayValue.responseCode,
count(request.ArrayValue.responseCode)
INTO
[requests-httpresponsecode]
FROM
[cvweu-internet-pr-sa-requests] R TIMESTAMP BY R.context.data.eventTime
OUTER APPLY GetArrayElements(R.request) as request
GROUP BY DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0), request.ArrayValue.responseCode, System.TimeStamp
Since continuous export became active on 3 September 2018, I chose a job start time of 3 September 2018. Since I am interested in the statistics up to today, I did not include a date interval, so I expect to see data from 3 September 2018 until now (20 December 2018). The job is running fine without errors, and I chose Power BI as the output sink. Immediately I saw the chart being populated, starting from 3 September, grouped by day and counting. So far, so good. A few days later I noticed that the output dataset no longer started from 3 September but from 2 December until now. Apparently the data is being overwritten.
The following link says:
https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-power-bi-dashboard
"defaultRetentionPolicy: BasicFIFO: Data is FIFO, with a maximum of 200,000 rows."
But my output table does not have anywhere near 200,000 rows:
datum,count,responsecode
2018-12-02 00:00:00,332348,527387
2018-12-03 00:00:00,3178250,3282791
2018-12-04 00:00:00,3170981,4236046
2018-12-05 00:00:00,2943513,3911390
2018-12-06 00:00:00,2966448,3914963
2018-12-07 00:00:00,2825741,3999027
2018-12-08 00:00:00,1621555,3353481
2018-12-09 00:00:00,2278784,3706966
2018-12-10 00:00:00,3160370,3911582
2018-12-11 00:00:00,3806272,3681742
2018-12-12 00:00:00,4402169,3751960
2018-12-13 00:00:00,2924212,3733805
2018-12-14 00:00:00,2815931,3618851
2018-12-15 00:00:00,1954330,3240276
2018-12-16 00:00:00,2327456,3375378
2018-12-17 00:00:00,3321780,3794147
2018-12-18 00:00:00,3229474,4335080
2018-12-19 00:00:00,3329212,4269236
2018-12-20 00:00:00,651642,1195501
EDIT: I have created the STREAM input source according to
https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-quick-create-portal. I can create a REFERENCE input as well, but this invalidates my query since APPLY and GROUP BY are not supported and I also think STREAM input is what I want according to https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs.
What am I missing? Is it my query?
It looks like you are streaming to a Streaming dataset. Streaming datasets don't store the data in a database; they keep only the last hour of data. If you want to keep the data that is pushed to them, you must enable the Historic data analysis option when you create the dataset.
This will create a PushStreaming (a.k.a. Hybrid) dataset with a basicFIFO retention policy (i.e. roughly 200k-210k records are kept).
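For reference, creating such a push (Hybrid) dataset manually through the Power BI REST API looks roughly like this; a sketch only: the defaultRetentionPolicy=basicFIFO query parameter is what enables the FIFO retention, and the dataset, table and column names here are purely illustrative:
// Illustrative sketch: create a push (Hybrid) dataset with basicFIFO retention
// via the Power BI REST API. Requires a valid Azure AD access token.
async function createPushDataset(accessToken: string) {
  const response = await fetch(
    "https://api.powerbi.com/v1.0/myorg/datasets?defaultRetentionPolicy=basicFIFO",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name: "requests-httpresponsecode", // illustrative name
        defaultMode: "PushStreaming",      // a.k.a. Hybrid
        tables: [
          {
            name: "RealTimeData", // illustrative table name
            columns: [
              { name: "datum", dataType: "Datetime" },
              { name: "responsecode", dataType: "String" },
              { name: "count", dataType: "Int64" },
            ],
          },
        ],
      }),
    }
  );
  return response.json();
}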
You're correct that Azure Stream Analytics should be creating a "PushStreaming" or "Hybrid" dataset. Can you confirm that your dataset is correctly configured as "Hybrid" (you can check this attribute even after creation)?
If it is the correct type, can you please clarify the following:
Does the schema of your data change? If, for example, you send the datum {a: 1, b: 2} and then {c: 3, d: 4}, Azure Stream Analytics will attempt to change the schema of your table, which can invalidate older data.
How are you confirming the number of rows in the dataset?
Looks like my query was the problem. I had to use TUMBLINGWINDOW(day,1) instead of System.TimeStamp.
TUMBLINGWINDOW and System.TimeStamp produce exactly the same chart output on the frontend, but they seem to be processed differently in the backend. This difference is not reflected in the frontend in any way, which made it confusing. I suspect that, because of the way the query is processed without TUMBLINGWINDOW, you hit the 200k-rows-per-dataset limit sooner than expected. The query below is the one that produces the expected result.
SELECT
request.ArrayValue.responseCode,
count(request.ArrayValue.responseCode),
DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0) as date
INTO
[requests-httpstatuscode]
FROM
[cvweu-internet-pr-sa-requests] R TIMESTAMP BY R.context.data.eventTime
OUTER APPLY GetArrayElements(R.request) as request
GROUP BY DATETIMEFROMPARTS(DATEPART(year,R.context.data.eventTime), DATEPART(month,R.context.data.eventTime),DATEPART(day,R.context.data.eventTime),0,0,0,0),
TUMBLINGWINDOW(day,1),
request.ArrayValue.responseCode
As we speak, my Stream Analytics job is running smoothly and producing the expected output from 3 September until now, without data being overwritten.

How to send an email with snapshot of New Relic chart data

We use New Relic, and I was tasked with figuring out a way to send an email every morning with the results of our overnight load and performance testing. I need to be able to send a snapshot of the specific time frame in which the test(s) is/are run, showing things like throughput, web transaction time, top DB calls, etc.
You can generate dynamic permalinks for any given time window in New Relic by using UNIX time. For example:
https://rpm.newrelic.com/accounts/<your_account_id>/applications/<your_application_id>?tw%5Bend%5D=1501877076&tw%5Bstart%5D=1501875276
Adjust the tw[end] and tw[start] values to the time range you want to return.
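A small sketch of building such a permalink programmatically; the account and application IDs are placeholders and the example window is arbitrary:
// Sketch: build a New Relic permalink for a fixed time window using UNIX timestamps (seconds).
function buildPermalink(accountId: string, applicationId: string, start: Date, end: Date): string {
  const toUnixSeconds = (d: Date) => Math.floor(d.getTime() / 1000);
  return (
    `https://rpm.newrelic.com/accounts/${accountId}/applications/${applicationId}` +
    `?tw%5Bend%5D=${toUnixSeconds(end)}&tw%5Bstart%5D=${toUnixSeconds(start)}`
  );
}

// Example: an arbitrary 01:00-05:00 UTC overnight test window.
const reportUrl = buildPermalink(
  "<your_account_id>",
  "<your_application_id>",
  new Date("2017-08-04T01:00:00Z"),
  new Date("2017-08-04T05:00:00Z")
);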

Oracle responsys - send record to table in warehouse on event

I'm trying to figure out how to send Responsys records to a table in our data warehouse (MS SQL) in real time, triggered by an interaction event.
The use case is:
- A mass email is sent
- Customer X interacts with the email (e.g. open, click)
- Responsys sends the contact, along with a unique identifier (let's call it 'customer_key') and a phone number, to the table in the warehouse within several minutes of the customer interaction
Once it's in the table I can pass it to our third-party call centre platform.
Any help would be greatly appreciated!
Thanks
Alex
From what I know of Responsys, the most often you can download interaction data is 6 times a day, via the Export Event Data Feed.
If you need it more often than that, I think you will need to set up a filter in Responsys that checks user interactions in the last 15 minutes, and then schedule a download for every 15-minute interval via Connect.
It would have to be 15 minutes, as you can only schedule a custom download within a 15-minute window in Responsys.
You'd then need to automate downloading the file, loading and importing it (a rough sketch of that step is below).
I highly doubt this is responsive enough for you, however!
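For the download/load/import automation step, a very rough sketch might look like the following; the CSV layout, table and column names, and the env variable are hypothetical and would need to match your actual Export Event Data Feed and warehouse schema:
// Very rough sketch: load a downloaded Responsys event-feed CSV into an MS SQL table.
// The file layout, table and column names, and the env variable are hypothetical.
import { readFileSync } from "fs";
import sql from "mssql";

async function loadEventFeed(csvPath: string) {
  const pool = await sql.connect(process.env.WAREHOUSE_CONNECTION_STRING!);

  // Skip the header row; assume columns: customer_key, phone_number, event_type, event_time.
  const [, ...rows] = readFileSync(csvPath, "utf8").trim().split("\n");

  for (const row of rows) {
    const [customerKey, phoneNumber, eventType, eventTime] = row.split(",");
    await pool
      .request()
      .input("customer_key", sql.VarChar, customerKey)
      .input("phone_number", sql.VarChar, phoneNumber)
      .input("event_type", sql.VarChar, eventType)
      .input("event_time", sql.DateTime, new Date(eventTime))
      .query(
        "INSERT INTO responsys_events (customer_key, phone_number, event_type, event_time) " +
          "VALUES (@customer_key, @phone_number, @event_type, @event_time)"
      );
  }

  await pool.close();
}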
