Create a total for an attribute not in the report - Cognos

I have created a crosstab report for my user that summarizes average test times for electronic components. The attributes are:
COMPONENT, WEEKTEST BEGIN DATE, TEST NAME, AUTOTESTER SOFTWARE VERSION, TEST TIME IN SECS, AVG TEST TIME IN SECS
This is my crosstab layout:
WEEKTEST BEGIN DATE
COMPONENT TESTNAME AUTOTESTERSW TOTAL(AVG TEST TIME IN SECS)
TOTAL BY COMPONENT
When I add a total to my report, it totals all the records for AVG TEST TIME IN SECS, as you would expect. However, my user wants the total to be for TEST TIME IN SECS instead.
I tried modifying the Total calculation from "total(currentMeasure within detail [COMPONENT])" to "total(currentMeasure within set [COMPONENT])" but the results made no sense.
I also tried changing the calculation to "total(currentMeasure within set [TEST TIME IN SECS])" but it failed: Report Studio threw an error because this attribute is not in the report (even though it is in the query).
Can this be done at all?
Cognos Report Studio 8.4
IBM DB2 UDB
Thanks in advance for your help.
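For reference, one expression form that is sometimes suggested for this situation is a query calculation (or modified total expression) that names the query item directly instead of using currentMeasure. This is only a sketch under assumptions not stated above: it assumes the crosstab's query is called Query1 and that [TEST TIME IN SECS] is still a data item in that query:
total([Query1].[TEST TIME IN SECS] for [Query1].[COMPONENT])
If Report Studio still rejects a reference to an item that is not in the layout, another commonly mentioned option is to add TEST TIME IN SECS to the crosstab as a measure and hide that cell, so the item is "in the report" without being visible; whether either approach behaves well in 8.4 depends on where the calculation lives and on the package.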

Related

Calculate project work risk completion rate

Calculate project work risk completion rate (in Excel) based on:
Deadline
Work effort / work-effort weight
# of tasks completed
Given
Project Unit: Milestone-1 - Milestone-4, MVP-1 (includes All milestones)
Work effort for each milestone (e.g. Milestone-1 = 3 points or small work effort, Milestone-2 = 5 or medium, Milestone-3 = 8 or large)
Work-effort weight for each Milestone (in %)
% or # of completed tasks per Milestone
How do I include the time in the equation? Say we have a start date and a projected due date or duration (e.g. x weeks, days, or months); I need to calculate the risk of completing a task (a milestone and the entire MVP) on time based on the current # of tasks completed.
In other words, what is the risk (small/medium/large) that a Milestone/Milestones/MVP will be completed on time (say, Mar-31, 2023) based on the number of tasks completed (15 of 40)?
Please let me know if I need to clarify
I really appreciate any help you can provide.
The attached image ("current view") is missing the time/deadline value, so the risk shown is inaccurate.
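One rough way to fold time into the comparison, offered purely as an illustration (the range names and the risk thresholds below are assumptions, not taken from the question):
Schedule elapsed % = (TODAY() - StartDate) / (DueDate - StartDate)
Weighted completion % = SUMPRODUCT(MilestoneWeights, MilestoneCompletion)
Gap = Schedule elapsed % - Weighted completion %
Risk: small if Gap <= 10%, medium if Gap <= 25%, large otherwise (example cut-offs only)
For instance, 15 of 40 tasks done is 37.5% complete (ignoring the effort weights for simplicity); if roughly 60% of the window between the start date and Mar-31, 2023 had already elapsed, the gap would be about 22.5 points, which these example cut-offs would label a medium risk.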

MS Project: always show a task in the timesheet

I am currently using Microsoft Project 2016 with timesheets.
How can I create a task which is always shown in my timesheet with 0 planned work time?
I have administrative tasks alongside my project tasks which I can't plan for an explicit date.
I have already tried to do this with an administrative time category, but that is shown for every user.
Is this currently supported?
How could I implement this feature myself through an add-on?
I've found a workaround for this. You can create a task with any duration (for example from 01.10.2016 to 31.12.2100) and a total work amount of 0.1 hours. Project will plan 0 hours every day for this duration, and the task is always shown in your timesheet.
Creating the task:
Appears in your timesheet:

Tuleap - Estimated Time Response

Is it possible to calculate a time-based response in Tuleap? What I mean is: when a task is submitted, Tuleap by default captures the "submitted on" value (date and time). I want to know whether Tuleap can set an estimated time value of +2 or +3 hrs from the "submitted on" value, so that the end user knows this task has to be completed within 2 or 3 hours. The estimated time value could also be triggered based on some other input.
As far as I know, the smallest time unit available is the day, so this does not seem to be feasible. Tuleap is more of a tracking tool (what happened) than a managing tool (what should happen).

Cognos Report Studio: DMB-ECB-0088 A DMB cube build limit has been exceeded

I'm running a report (Snapshot attached).
If I filter the date range to 1-6 months it runs fine, but if I filter it for a year the report throws an error: DMB-ECB-0088 A DMB cube build limit has been exceeded.
I searched for this error and found nothing constructive. The user requirement is for a year or two of data, and I was confident that a simple report could handle a large volume. It's not an Active Report.
Already Tried:
I've tried turning on the Local Cache.
I also changed the MaxCacheSize value from the default of 400 to 1200 in the qfs_config.xml file.
I'm not sure if it's an issue at the DB's end. My DWH source is DB2.
Any help would be much appreciated.
Thanks,
Nuh

Tracking metrics using StatsD (from Etsy) and Graphite: the Graphite graph doesn't seem to be graphing all the data

We have a metric that we increment every time a user performs a certain action on our website, but the graphs don't seem to be accurate.
Going off this hunch, we investigated carbon's updates.log and discovered that the action had happened over 4,000 times today (using grep and wc), but according to the integral of the graph it came to only about 220.
What could be the cause of this? Data is being reported to StatsD using the StatsD PHP library by calling statsd::increment('metric'); and, as stated above, the log confirms that 4,000+ updates to this key happened today.
We are using:
graphite 0.9.6 with statsD (etsy)
After some research through the documentation, and some conversations with others, I've found the problem - and the solution.
The way the whisper file format is designed, it expects you (or your application) to publish updates no faster than the minimum interval in your storage-schemas.conf file. This file configures how much data retention you have at different time-interval resolutions.
My storage-schemas.conf file was set with a minimum retention time of 1 minute. The default StatsD daemon (from Etsy) is designed to flush to carbon (the graphite daemon) every 10 seconds. This is a problem because, over a 60-second period, StatsD reports 6 times, and each write overwrites the previous one within that 60-second interval, since you're updating faster than once per minute. This produces really weird results on your graph, because the last 10 seconds of a minute could be completely dead and report 0 activity for that period, which wipes out all of the data you had written for that minute.
To fix this, I had to re-configure my storage-schemas.conf file to store data at a maximum resolution of 10 seconds, so every update from StatsD would be saved in the whisper database without being overwritten.
Etsy published the storage-schemas.conf configuration that they were using for their installation of carbon, which looks like this:
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10:2160,60:10080,600:262974
This has a 10-second minimum retention interval and stores 6 hours' worth of those points (2,160 points x 10 seconds). However, due to my next problem, I extended the retention periods significantly.
As I let this data collect for a few days, I noticed that it still looked off (and was under reporting). This was due to 2 problems.
1. StatsD (older versions) only reported an average number of events per second for each 10-second reporting period. This means that if you incremented a key 100 times in 1 second and 0 times for the next 9 seconds, at the end of the 10th second StatsD would report 10 to graphite instead of 100 (100/10 = 10). This failed to report the total number of events for a 10-second period (obviously). Newer versions of StatsD fix this problem, as they introduced the stats_counts bucket, which logs the total # of events per metric for each 10-second period (so instead of reporting 10 in the previous example, it reports 100). After I upgraded StatsD, I noticed that the last 6 hours of data looked great, but as I looked beyond the last 6 hours, things looked weird, and the next reason is why:
2. As graphite stores data, it moves data from high-precision retention to lower-precision retention. This means that, using the etsy storage-schemas.conf example, after 6 hours of 10-second precision, data was moved to 60-second (1-minute) precision. In order to merge 6 data points from 10s to 60s precision, graphite takes an average of the 6 data points: it takes the total value of the oldest 6 data points and divides it by 6. This gives an average # of events per 10 seconds for that 60-second period, and not the total # of events, which is what we care about specifically. This is just how graphite is designed, and for some cases it might be useful, but in our case it's not what we wanted. To "fix" this problem, I increased our 10-second precision retention time to 60 days. Beyond 60 days, I store the minutely and 10-minutely precisions, but they're essentially there for no reason, as that data isn't as useful to us.
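For concreteness, an extended retention line along the lines described above might look like this. The 10-second tier follows from the 60-day figure mentioned (60 days is roughly 518,400 ten-second points); the minutely and 10-minutely tiers are illustrative guesses, since exact lengths for them aren't given here:
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10:518400,60:259200,600:262974
In the old seconds:points retention syntax used above, 10:518400 keeps 10-second data for about 60 days, 60:259200 keeps minutely data for about 180 days, and 600:262974 keeps 10-minutely data for roughly 5 years.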
I hope this helps someone, I know it annoyed me for a few days - and I know there isn't a huge community of people that are using this stack of software for this purpose, so it took a bit of research to really figure out what was going on and how to get a result that I wanted.
After posting my comment above I found Graphite 0.9.9 has a (new?) configuration file, storage-aggregation.conf, in which one can control the aggregation method per pattern. The available options are average, sum, min, max, and last.
http://readthedocs.org/docs/graphite/en/latest/config-carbon.html#storage-aggregation-conf
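As a sketch of what such an entry could look like for the stats_counts buckets mentioned earlier (the section name is arbitrary, and the pattern would need to match your own metric namespace):
[sum_counts]
pattern = ^stats_counts\..*
xFilesFactor = 0
aggregationMethod = sum
With aggregationMethod = sum, the six 10-second counts are added together when they are rolled up to 1-minute precision, so totals survive downsampling instead of being averaged.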
