Is it possible to intercept the SQL query between the Power BI Q&A ("Ask a Question") front end and the back end, in order to control the answers presented in visualisations so that small numbers are always suppressed? This is in the health setting, where privacy is important.
For example, the natural-language question "how many patients were admitted by ward on xx/xx/2017?" will produce a bar chart showing the record count by ward. If any of the values are less than 5, I want to suppress that value by rounding it up to 5.
I have tried a custom measure (let's call it "admissions"), but it is only invoked if the user specifically uses the name of the measure in the question. So "total admissions by ward..." works OK, but if the word "admissions" is left out, it just shows a record count ("total by ward...").
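For what it's worth, if the data behind Q&A comes from a SQL database you control, one option (purely a sketch; the view, table and column names below are assumptions, not anything from the question) is to suppress the small cells at the source, so the model never contains the raw small counts:

-- Sketch only: pre-aggregate and suppress small cells before the data reaches Power BI.
-- Counts below 5 are rounded up to 5; all object names here are assumed, not real.
CREATE VIEW dbo.AdmissionsByWardSuppressed AS
SELECT
    Ward,
    AdmissionDate,
    CASE WHEN COUNT(*) < 5 THEN 5 ELSE COUNT(*) END AS AdmissionCount
FROM dbo.Admissions
GROUP BY Ward, AdmissionDate;

The trade-off is that Q&A would then be answering from a pre-aggregated view rather than row-level data, so any breakdown not included in the view's GROUP BY is no longer available.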
I'm using Power Query to pull data from a folder with between 2 and 40 Excel files. One of the cells I'm pulling includes an explanation, sometimes a lengthy one, but Power Query is only displaying two or three sentences rather than the entire text.
When I set Power Query up, the preview shows a truncated version of the source explanation, and accordingly the final Power Query output is truncated. However, I would like Power Query to display the entire explanation.
Source explanation:
"Joe went out to Orica (On the last day of the month) to bill out safety that they were in need of. Jamie Kennedy (GAM) is working on getting their safety fill list put together. Some of the pricing was not entered correctly/some of the items were not allowed to be billed from the PRC (but they were allowed on their fill list). Joe only needed $460 to hit his safety goal - and with these items he would have exceeded the goal. The price update would not go through until Friday, so we were unable to backdate the invoices. Email exchange between myself, and Jamie are attached. I also attached a copy of the handwritten, this is the sorbent pads that are 55.75each - the SSR dropped off 12. ($669)"
Power Query output:
"Joe went out to Orica (On the last day of the month) to bill out safety that they were in need of. Jamie Kennedy (GAM) is working on getting their safety fill list put together. Some of the pricing was not entered correctly/some of the items were not al"
I want the Power Query output to match the source explanation (the output above appears to be cut off at roughly 255 characters).
I work for a hospital that is part of a larger network. We were recently asked by our corporate overlords to address the use of a specific laboratory test. In general, this test should only be performed daily, which should be taken to correspond to a 24-hour period from the last draw. Sometimes, however, based on when people arrive at the hospital (e.g. 7 pm), and in the interest of bundling labs into a single draw, the test may be drawn sooner so that it coincides with routine testing (i.e. 5 am). It would otherwise never be necessary to repeat the test within a short (8-hour) window, particularly on the same day.
We have been asked to validate whether we are adhering to this general practice, since testing any more frequently than that, say within 12 hours of a previous test, has no real clinical value and thus adds unnecessary cost.
To address this issue I was given a dataset that, among other items, includes every instance the lab was performed, including collection date and time.
Please see the HIPAA-safe example below (to be clear, there is no real data and the identifiers are not real); the actual dataset has over 4,174 entries corresponding to 1,328 unique persons. Everyone had at least one test performed; not everyone had more than one.
I THINK what I want to do is use an IF formula that reads the preceding row to 1) check whether it is the same person and 2) if so, subtract the time stamps to display the relevant difference in time, which I can then filter, build a histogram from, etc. Does this seem like a reasonable approach? Is there a preferable method to facilitate the analysis? Do any other forms of analysis come to mind?
=IF(B2=B1, D2-D1, "n/a")
Example data set with the formula applied (screenshot not reproduced here). Any other forms of analysis come to mind?
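As an aside, if the same extract can be queried in a database rather than Excel, a window function gives the per-person interval directly, without relying on the sort order of the sheet. A sketch in T-SQL-style syntax; the table and column names are assumptions:

-- Sketch: hours since each patient's previous draw; NULL for their first test.
-- LabResults, PatientID and CollectionDateTime are assumed names.
SELECT
    PatientID,
    CollectionDateTime,
    DATEDIFF(
        hour,
        LAG(CollectionDateTime) OVER (PARTITION BY PatientID ORDER BY CollectionDateTime),
        CollectionDateTime
    ) AS HoursSinceLastDraw
FROM LabResults;

Filtering on HoursSinceLastDraw < 24 (or < 12) then isolates the repeats in question.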
By the looks of it you should consider taking the values under "Results" into account, assuming there is a band of readings that might be considered 'normal'. The "one in 24 hours is sufficient" rule of thumb may well be appropriate for a series of values within the 'normal' band, but not so much if readings are close to a 'danger level'.
That is, in some cases a higher-than-'standard' frequency of monitoring may be in the patient's interest, even if it is not hospital policy. So it may be worth separating the "less than 24 hours" intervals into those where the higher frequency provided information of little value (e.g. readings remaining within the 'normal' band) and those where a reading crossed into or out of the band and/or changed substantially in value. This, though, may be more a matter of statistical analysis than programming, and depends on whether any action might be taken as a result of such "extra" readings.
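To make that concrete (again purely a sketch: the band limits 3.5 and 5.0 and every name below are placeholders, not real reference ranges), the window-function idea above could carry a flag for whether a repeat and the reading before it both stayed inside the 'normal' band:

-- Sketch: flag short-interval repeats where both the current and previous result
-- sit inside an assumed 'normal' band. Limits and names are placeholders.
SELECT
    PatientID,
    CollectionDateTime,
    ResultValue,
    DATEDIFF(
        hour,
        LAG(CollectionDateTime) OVER (PARTITION BY PatientID ORDER BY CollectionDateTime),
        CollectionDateTime
    ) AS HoursSinceLastDraw,
    CASE
        WHEN ResultValue BETWEEN 3.5 AND 5.0
         AND LAG(ResultValue) OVER (PARTITION BY PatientID ORDER BY CollectionDateTime)
             BETWEEN 3.5 AND 5.0
        THEN 'both in band'
        ELSE 'crossed or outside band'
    END AS BandFlag
FROM LabResults;

Repeats drawn within 24 hours and flagged 'both in band' are the ones of questionable value; the others may well have been clinically motivated.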
How to Pass a Calculated Member/measure to a Drill-thru Target Report
In order to avoid using calculated members (from googling, some people were saying you cannot pass them via drill-thru), I went back to my FM model and created three new measures (High Risk, Medium Risk and Low Risk). These now show up in the drill-thru definition's parameter list. My only problem is: how can I check which of the three measures has been selected by a user?
Remember, I basically have a line chart with three lines, one for each measure above (High, Medium or Low Risk), by time frame. A user will select a data point, for example High Risk for March or Medium Risk for Semester 2. I then need to pass the value for that data point to my target (second) report. How can I check which of the three measure values they passed through?
I have this dashboard. In sheet two, the bar chart is just counting the number of marks in each range specified on the x-axis. What I actually want is a bar chart over the same ranges, but counting students by their average mark. In sheet 3, the bar chart looks similar to what I expect, but if you take a closer look, it is just stacking each student's average one on top of another.
So, how can I make a bar chart with the frequency of students' average marks? The ranges should be: [0, 5), [5, 10), [10, 15), [15, 20].
One solution is to create a custom SQL data connection that first calculates the average NOTA for each student, as below:
select NOMBRES, avg(NOTA) as avg_nota from YOUR_TABLE group by NOMBRES
Then you can create a histogram for avg_nota, either with Show Me or manually.
Here is a link to an example based on your original
The SQL above weights each score equally, which is fine if each course has exactly the same number of grades. But if the number of records varies between courses, you should adjust the approach to make sure each course is weighted the same (e.g. so that a course with 10 small tests does not get weighted twice as much as a course with 5 larger tests). The solution in that case might involve repeating the above step in a nested subquery or view, grouping by both NOMBRES and CURSO. Still, this simple approach should give you the basic idea.
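That nested version might look something like the following (a sketch of the idea only, reusing the same placeholder names):

-- Average each course first, then average the course averages per student,
-- so each course carries equal weight regardless of how many grades it has.
select NOMBRES, avg(avg_nota_curso) as avg_nota
from (
    select NOMBRES, CURSO, avg(NOTA) as avg_nota_curso
    from YOUR_TABLE
    group by NOMBRES, CURSO
) per_curso
group by NOMBRES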
The solution above works, but I think there ought to be a way to get the same effect using table calculations, without resorting to custom SQL.
Here is another question I have, about whether this scenario can be calculated in Access, or even at all for that matter:
I have a query that finds the TOP 5 items sold in a given timeframe, grouped by site. I use this to create a comparative chart between the sites for PowerPoint presentations. I do a lot of these, but there is an aspect of the presentation that I foresee people will have a problem with, and it makes for bad metrics:
Some stores are bigger than others and get much more supply, so a straight aggregate total of the quantity of top-selling items, compared across locations, is stacking the deck a little.
So if Site A gets 80% of the supply and sells 500, Site B gets 15% of the supply and sells 75, and Site C gets 5% of the supply and sells 50 items, then Site C actually has the best sales for its size. I have exactly what I need (from my queries and such) for the first chart showing the aggregate totals, but what do I need in order to represent the idea mentioned above?
The factors that I have that go into this are:
ItemID - group by
Item - group by
qty sold - sum/descending (which is the variable that determines the Top 5)
Store/Location - Group By
and then I run a separate query to get the total deliveries (supply) to each site.
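One way to put those two queries together (purely a sketch; SalesQuery, SupplyQuery and the field names are made-up stand-ins for my real queries) would be to join them on the site and divide quantity sold by total deliveries:

-- Sketch: sales per unit of supply, so smaller sites are not penalised for
-- receiving less stock. All names here are stand-ins.
select
    s.Site,
    s.ItemID,
    s.Item,
    s.QtySold,
    d.TotalDeliveries,
    1.0 * s.QtySold / d.TotalDeliveries as SoldPerUnitSupplied
from SalesQuery as s
inner join SupplyQuery as d
    on s.Site = d.Site
order by s.Site, 1.0 * s.QtySold / d.TotalDeliveries desc;

That would make the comparison "units sold per unit supplied" rather than raw totals, which should capture the Site C effect described above.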
I realize that this may just be a lack of mathematical understanding on my part, but can anyone help with this?
thanks
The first issue that I see isn't about SQL savvy; it's how to serve your data customer. What does he or she want to see? Metrics is a term with a holy ring, and for good reason: it's supposed to be what is used for the big business decisions, and it's scary easy to measure the wrong thing.
So I'd make sure I know what my customer wants. If you can't model it on a spreadsheet, you won't be able to develop your reporting effectively.
Every deck of cards is loaded. You have to know how they want it loaded.