Flurry Parameter limitation - flurry

In my application I have an event that has occurred more than 1000 times. When I try to retrieve that event along with its parameters, I only get 500 parameter values.
Is there any way to get all the records?
API call:
http://api.flurry.com/eventMetrics/Event?apiAccessCode=APIACCESSCODE&apiKey=APIKEY&startDate=STARTDATE&endDate=ENDDATE&eventName=EVENTNAME&versionName=VERSIONNAME
Thanks

The Event parameters report shows only the top 500 values. All parameter values beyond the top 500 get grouped into 'Others'.
(Full disclosure: I work in the Support team at Flurry)

It seems there is no way to drill down into the information beyond the top 500 values. From the Flurry FAQ:
There is currently no way to drill down beyond the Top 500 values. If you require additional information, one best practice is to create groupings for the Parameter values (e.g. if the values can be individual numbers, create groups such as ‘1-10’, ‘11-20’ and so on, rather than storing each individual value).
https://developer.yahoo.com/flurry/docs/faq/events/
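For illustration, that grouping can be done in the app before the event is logged. A minimal sketch, in plain Python with made-up names (not Flurry SDK code):

def bucket_value(value, size=10):
    # Map a numeric value to a range label such as '1-10' or '11-20'.
    if value < 1:
        return '0 or less'
    low = ((value - 1) // size) * size + 1
    return '{0}-{1}'.format(low, low + size - 1)

# Log the bucketed label as the event parameter instead of the raw number,
# so the report stays well under the 500 distinct-value limit.
params = {'items_in_cart': bucket_value(37)}  # -> {'items_in_cart': '31-40'}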

Related

Original Estimates in Azure DevOps - Max value?

Our evolution of using DevOps is continuing (slowly but surely). One thing we've noticed is that some people are trying to put excessive estimates in for their time, but what we really want to be encouraging is for people to break work down into multiple tasks.
Is there a way that we can set our DevOps work items to only accept a maximum value? I've had a look at the 'rules' and there doesn't seem to be anything there to let us do this, and because it's an out of the box field I don't think we can put a value limit against it.
I suppose what I want to understand is whether it would be possible to do this in some way? Could I do something with the existing 'Original Estimate' field or would I have to create a new custom field to have any chance of preventing people from putting in 100 hours for something that's actually more like 2?
If you are also using Boards, you could highlight work items where the original estimate is higher than a certain value. This would not prevent setting these values, but rather encourage the users to put in lower values.
https://learn.microsoft.com/en-us/azure/devops/boards/boards/customize-cards?view=azure-devops
Beware that this might not really address the underlying issue: people must be convinced of the benefits of splitting up tasks, otherwise they will just work around the tooling, for example by always entering the maximum value or by not recording their actual hours.
Is there a way that we can set our DevOps work items to only accept a maximum value?
I am afraid that setting a value limit for the Original Estimate field is currently not supported.
As a workaround, you could create a custom field of type Picklist and then specify the available values in the picklist.
You could add a request for this feature on our UserVoice site, which is our main forum for product suggestions. After the suggestion is raised, you can vote for it and add your comments. The product team provides updates as they review the feedback.

Azure Search Distance filter with variable distance

Suppose I have the following scenario:
A search UI to allow individuals to find plumbers who are able to service their home location.
When a plumber enters their info into the system, they provide their coordinates and a maximum distance they are willing to travel.
The individual can then enter their home coordinates and should be presented with a list of plumbers who are eligible.
Looking at the Azure Search geo.distance function, I cannot see how to do this. Scenarios where the searcher provides a distance are well covered but not where the distance is different for each search record.
The documentation provides the following example:
$filter=geo.distance(location, geography'POINT(-122.131577 47.678581)') le 10
This works correctly, but if I try to change the 10 to the maxDistance field, it fails with:
Comparison must be between a field, range variable or function call and a literal value
My requirement seems fairly basic, but I am now wondering whether this is currently possible with Azure Search?
I found an Azure feedback suggestion asking for this feature, but there is no news on if/when it will be implemented. Therefore it is safe to assume that this scenario is not currently supported.
To add to Paul's answer, one possible workaround is to use a conservatively large constant value instead of referencing the maxDistance field in your $filter expression. Then you can filter the resulting list of plumbers on the client, taking each plumber's max distance into account, to produce the final list.
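For illustration, a minimal sketch of that client-side post-filtering step, in plain Python (not the Azure Search SDK; the document field names lat, lon and maxDistance are assumptions):

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def plumbers_in_range(search_results, home_lat, home_lon):
    # search_results: documents already narrowed server-side with a large constant radius.
    return [doc for doc in search_results
            if haversine_km(home_lat, home_lon, doc['lat'], doc['lon']) <= doc['maxDistance']]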

Is there a limit to how many Multiple Response Sets I can make on SPSS?

I am preparing a dataset on SPSS to analyze a survey I prepared on Limesurvey. This survey happens to have lots of multiple response set questions.
I have already done 20 multiple response sets via Analyze >>> Multiple Response >>> Define Variables. However, when I come to add more, the option to create another multiple response set is no longer present, even though I have inputted all the required info.
So, does SPSS have a limit on how many multiple response sets can be made or am I doing something wrong?
Also, what other alternatives are there?
There is no limit on the number of MR sets you can define. Be sure that you have entered all the required information in order to enable the Add button in the dialog. Note that an MR set must have at least two variables.
If you can't get the dialog to work, you can define the sets via syntax, which would be faster anyway. For example:
MRSETS
/MDGROUP NAME=$health LABEL="Health status"
VARIABLES=hlth1 hlth2 hlth3 hlth4 hlth5
VALUE=1.

Is there a way to have sub-transactions in Gatling?

The requested page returns multiple results, and its response time varies accordingly, depending on the number of results.
With Gatling I have one transaction with all the response times in it; in addition, I'd like to have sub-transactions depending on the range of results, for example:
BuildTable (10Txs)
BuildTable_0_10 (2Txs)
BuildTable_10_100 (6Txs)
BuildTable_100_all (2Txs)
The main goal is to have this breakdown visible in the report. Any idea how I can achieve this?
The way to have "transactions" in Gatling is to use "groups". But a group wraps a delimited sequence of requests, so its name is computed before entering the sequence, hence in your case before the number of results is known.
So the only way is to know the number of expected results beforehand, e.g. by having this information in a feeder alongside the search keywords, and then either switching to different branches or computing the group name dynamically with a function.

Dynamics CRM 2011 Import Data Duplication Rules

I have a requirement in which I need to import data from excel (CSV) to Dynamics CRM regularly.
Instead of using simple data duplication rules, I need to implement a point system to determine whether a record is considered a duplicate or not.
Let me give an example. These are the rules for a particular import:
First Name, exact match, 10 pts
Last Name, exact match, 15 pts
Email, exact match, 20 pts
Mobile Phone, exact match, 5 pts
And then the Threshold value => 19 pts
Now, if a record has First Name and Last Name matching an existing record in the entity, it scores 25 pts, which is higher than the threshold (19 pts), and therefore the record is considered a Duplicate.
If, for example, the record only has the same First Name and Mobile Phone, it scores 15 pts, which is lower than the threshold, and it is thus considered a Non-Duplicate.
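In code, the scoring I have in mind would look roughly like this (plain Python, for illustration only; the field names are placeholders, not CRM schema names):

RULES = {'first_name': 10, 'last_name': 15, 'email': 20, 'mobile_phone': 5}
THRESHOLD = 19

def is_duplicate(new_record, existing_record):
    # Sum the points of every field that matches exactly; a score above the
    # threshold marks the new record as a duplicate.
    score = sum(pts for field, pts in RULES.items()
                if new_record.get(field) and new_record.get(field) == existing_record.get(field))
    return score > THRESHOLD

# First Name + Last Name match: 25 pts -> Duplicate; First Name + Mobile Phone: 15 pts -> Non-Duplicate.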
What is the best approach to achieve this requirement? Is it possible to utilize the default Import Data functionality in MS CRM? Is there any 3rd-party add-on that answers my requirement above?
Thank you for all the help.
Updated
Hi Konrad, thank you for your suggestions, let me elaborate here:
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Nice one, but I don't think it is really workable in my case; the data will be coming in regularly from the client in moderate numbers (hundreds to thousands), and typically the client won't check the data for duplication.
Workflow. Run a process removing any instance calculated as a duplicate.
Workflow is a good idea; however, since it is processed asynchronously, my concern is that in some cases the user may already have made updates/changes to the inserted data before the workflow finishes, thereby creating data inconsistency or at the very least a confusing user experience.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel its creation (or mark it for removal).
I like this approach. So I just import as usual (for example, to the Contact entity), but I already have a plugin in place that gets triggered every time a record is created; the plugin will check whether the record is duplicate-ish or not and take the necessary action.
I haven't been fiddling a lot with duplicate detection, but looking at your criteria you might be able to make rules that match them: pretty much three rules to cover your cases, a full-name match, a last name and mobile phone match, and an email match.
If you want to do the points system I haven't seen any out of the box components that solve this, however CRM Extensions have a product called Import Manager that might have that kind of duplicate detection. They claim to have customized duplicate checking. Might be worth asking them about this.
Otherwise it's custom coding that will solve this problem.
I can think of the following approaches to the task (depending on the number of records, the repetitiveness of the import, automation requirements etc.); they may all be good in some way. Would you care to elaborate on the current conditions?
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel its creation (or mark it for removal).
Workflow. Run a process removing any instance calculated as a duplicate.
You also need to consider the implications of eliminating data this way. There's a mathematical issue. Suppose that the uniqueness radius (i.e. the threshold in this 1D case) is 3. Consider the following set of numbers (listed twice, just in a different order).
1 3 5 7 -> 1 _ 5 _
3 1 5 7 -> _ 3 _ 7
Are you sure that's the intended result? Under some circumstances, you can even end up with sets of records of different sizes (depending only on the order). I'm a bit curious about why and how this setup came up.
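To make the issue concrete, here is a rough sketch, assuming a greedy pass that keeps a value only when it is at least the radius away from everything kept so far:

def greedy_dedupe(values, radius=3):
    kept = []
    for v in values:
        # Keep v only if it is at least `radius` away from every value already kept.
        if all(abs(v - k) >= radius for k in kept):
            kept.append(v)
    return kept

print(greedy_dedupe([1, 3, 5, 7]))  # [1, 5]
print(greedy_dedupe([3, 1, 5, 7]))  # [3, 7]

Which records survive depends entirely on the processing order.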
Personally, I'd go with the plugin, if the above is OK by you. If you need to make sure that certain unique-ish elements never get omitted, you'd probably be best off applying a test algorithm to a backup of the data. However, that may defeat its purpose.
In fact, it sounds so interesting that I might create the solution for you (just to show it can be done) and blog about it. What's the deadline?
