Starting from ActivePivot Sandbox 4.3.2, I changed the objects fed into the cube and redefined the fields, dimensions and measures of the cube. When I start the cube I see no error message in the logs.
However when I connect to the cube using ActivePivot Live 2.6.2 or Excel 2010 and run the following MDX query:
SELECT FROM [cubeName] WHERE ([Measures].[contributors.COUNT])
I see an empty pivot table. What could be the cause, and how can it be diagnosed?
The most common reason for empty pivot tables on a non-empty cube is the presence of slicing dimensions. If you have:
2 slicing dimensions 'A' and 'B'
a first fact contributing 'b' on 'A' and 'a' on 'B'
a second fact contributing 'a' on 'A' and 'b' on 'B'
then the default members will be 'a' along 'A' and 'a' along 'B'. The query you described would then return an empty pivot table, since there is no fact with 'a' along 'A' and 'a' along 'B'.
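To diagnose this, you can override the default member of one slicing dimension explicitly in the WHERE clause. As a sketch against the hypothetical dimensions above (the actual hierarchy and member names depend on your cube):
SELECT FROM [cubeName] WHERE ([Measures].[contributors.COUNT], [B].[b])
If that query returns a count, the empty result of the original query comes from the combination of default members on the slicing dimensions.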
The second main reason is security filtering. You should retry with a user without any access restrictions. This is easy to do by issuing the query through the dedicated operation on the ActivePivotManager monitoring bean.
Of course, you should first check that your cube is non-empty (through JConsole).
I am currently using Azure Data Factory to retrieve a fixed-length file from blob storage and trying to import the records into my database.
Fixed-length.txt
0202212161707
1Tom
1Kelvin
1Michael
23
The first row is the header record, which starts with '0' and is followed by the creation time.
The following rows are detail records, which start with '1' and are followed by a user name.
The last row is the trailer record, which starts with '2' and is followed by the total number of detail records.
However, I want to validate that the file's data is correct before I insert the records. I would like to check that the checksum is correct first, and then insert only the records that start with '1'.
Currently, I insert those records line by line into the SQL database and run a stored procedure to perform these tasks. Is it possible to use Azure Data Factory to do this? Thank you.
I reproduced your scenario with the steps below.
First, add a Lookup activity to read all the rows from the file, so the later activities can work on that data.
Then add a Set variable activity that extracts the last character of the last row (e.g. 3 from 23) with the dynamic expression below.
@last(activity('Lookup1').output.value[sub(length(activity('Lookup1').output.value),1)].Prop_0)
Then add a Filter activity to keep only the rows prefixed with '1', using the Items value and Condition below.
items : @activity('Lookup1').output.value
condition : @startswith(item().Prop_0,'1')
After the filter, add a ForEach activity over the filtered rows to append those values to an array.
Then, inside the ForEach activity, add an Append variable activity; it will build an array from the filtered values, as sketched below.
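As a rough sketch, the Append variable activity's Value can simply reference the current item's column (assuming the file is read as a single column named Prop_0, as in the Lookup output above):
value : @item().Prop_0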
Now add an If Condition activity with an expression that checks whether the value of the first Set variable equals the length of the appended array variable.
@equals(int(variables('sum')),length(variables('username')))
Then, inside the True branch, add your Copy activity so the data is copied only when the condition is met.
My Sample Output:
0202212161707
1Tom
1Kelvin
23
For the data above, control goes to the False branch (the trailer says 3 detail records but only 2 are present).
0202212161707
1Tom
1Kelvin
1Michael
23
For the data above, control goes to the True branch (the trailer count of 3 matches the 3 detail records).
I'm using ADODB to query Sheet1. If I fetch the data with the SQL query below, without grouping, I get all of the characters from the Comment column.
However, if I use grouping, the text is truncated to 255 characters.
Note: my first row contains an 800-character value, so the driver should have identified the data type correctly.
Here is my query without grouping
Select Product, Value, Comment, len(comment) from [sheet1$A1:T10000]
With grouping
Select Product, sum(value), Comment, len(comment) from [sheet1$A1:T10000] group by Product, Comment
Thanks for posting this! In 20+ years of database development using ADO recordsets I had never faced this issue until this week. Once I traced the truncation to the recordset I was really scratching my head; I couldn't figure out how or why it was happening until I found your post, which got me focused on the GROUP BY. Sure enough, that was the cause (some kind of ADO bug, I guess). I was able to work around it by putting correlated scalar subqueries in the SELECT list instead of using JOIN and GROUP BY.
To elaborate...
At least 9 times out of 10 (in my experience) JOIN/GROUP BY syntax can be replaced with correlated scalar subquery syntax, with no appreciable loss of performance. That's fortunate in this case since there is apparently a bug with ADO recordset objects whereby GROUP BY syntax results in the truncation of text when the string length is greater than 255 characters.
The first example below uses JOIN/GROUP BY. The second uses a correlated scalar subquery. Both would/should provide the same results. However, if any comment is greater than 255 characters these 2 queries will NOT return the same results if an ADODB recordset is involved.
Note that in the second example the last column in the SELECT list is itself a full select statement. It's called a scalar subquery because it will only return 1 row / 1 column. If it returned multiple rows or columns an error would be thrown. It's also known as a correlated subquery because it references something that is immediately outside its scope (e.emp_number in this case).
SELECT e.emp_number, e.emp_name, e.supv_comments, SUM(i.invoice_amt) As total_sales
FROM employees e INNER JOIN invoices i ON e.emp_number = i.emp_number
GROUP BY e.emp_number, e.emp_name, e.supv_comments
SELECT e.emp_number, e.emp_name, e.supv_comments,
(SELECT SUM(i.invoice_amt) FROM invoices i WHERE i.emp_number = e.emp_number) As total_sales
FROM employees e
I am struggling with the ordering of data in Cassandra. I have a table like this:
tbl_data
- yymmddhh (text)
- data (text)
partition key is 'yymmddhh'
I am adding data like this
'16-11-17-01', 'a'
'16-11-17-01', 'b'
'16-11-17-02', 'c'
'16-11-17-03', 'xyz'
'16-11-17-03', 'e'
'16-11-17-03', 'f'
select * from tbl_data limit 10;
I am expecting the data in the order in which I added it, but it is giving me data like this:
'16-11-17-03', 'f'
'16-11-17-03', 'e'
'16-11-17-01', 'a'
i.e. the latest record first, or some seemingly random order. I need the data in the same order in which I added it, and I am not able to figure out the default order of the data in my case. Also, I don't want to pass the partition key in the WHERE clause, because having to remember that value is an overhead for me. Kindly suggest a solution.
I'm afraid you will struggle forever on this.
As per the comments, you can't control the order "outside" a partition, unless you really understand what you're doing and change the partitioner.
Please have a read of the suggested link, and of this and this SO answer, to understand why you are getting your records in this specific order (yes, they ARE ordered...).
A possible solution, however, is to add a timestamp clustering key, and change the partition key to a simpler "yymmdd":
tbl_data
- yymmdd (timestamp)
- hhmmssMMM (timestamp)
- data (text)
Now you'd store data on a day-by-day basis (that is, you need to know which day you are querying data for), and the order of your data inside each partition (that is, each day) is determined by the timestamp clustering column, so for your requirement you'd store the insertion time of the record there.
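A minimal CQL sketch of this design, keeping the column names above (types and names are illustrative):
CREATE TABLE tbl_data (
    yymmdd timestamp,
    hhmmssMMM timestamp,
    data text,
    PRIMARY KEY (yymmdd, hhmmssMMM)
) WITH CLUSTERING ORDER BY (hhmmssMMM ASC);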
Now, if you don't insert data every day, you really need to keep track of the insertion dates in another (very simple) table:
CREATE TABLE inserted_days (
yymmdd timestamp PRIMARY KEY
);
Issuing a
SELECT * FROM inserted_days
would scan the whole table, returning records in an order that looks random from your application's point of view (so you'd need to sort them), but here we are talking about 365 records per year, something you don't need to worry about. It's easy to do and you won't incur unmanageable overhead.
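Once you know which day you need, reading that day's data back in insertion order is a single-partition query, for example (the date literal is illustrative):
SELECT data FROM tbl_data WHERE yymmdd = '2016-11-17';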
HTH.
I have a first list that returns all of the data (approx. 50 columns, including data from foreign-key tables) through a model.
My second list contains only column names as rows (approx. 10 column names; there can be more, depending on a condition).
Now I want to bind a table using the column names from the second list, with the data coming from the first list, because those columns are available in the first list. So I want only those columns, using MVC. Is this possible?
Note: I am using this for view settings, which differ for each view we create, so please tell me how I can map the rows of the second list to the columns of the first list using an MVC query.
I have run into performance problems with MDX measure calculations for a summary report, using SQL Server 2008 R2.
I have a Person dimension and a related fact table (Qualifications) containing multiple records per person.
E.g. [Measures].[Other Qual Count] would give me the number of qualifications of a certain type.
Each person could have several, so [Measures].[Other Qual Count] can be greater than 1 for a single person.
However, on my summary report I would like to count this as 1 per person only (to indicate the number of persons with Other qualifications).
The summary report rolls up the values against some other dimensions including an unknown Region hierarchy (it can be one of 3 hierarchies).
I have done this as follows:
MEMBER [Measures].[Other Count2]
AS
SUM(
EXISTING [Person].[Staff Code].[Staff Code].Members,
IIF([Measures].[Other Count] > 0, 1, NULL)
)
However, I have to create several more derived measures, deriving from each other and all at Person level to avoid unwanted multiple counts. The query slows down from under 1 second to over a minute (my goal is under 3 seconds).
The reason for all the derivations is the amount of logic needed to determine which one of 6 mutually exclusive columns a person should be reported in.
I have also tried to create a Cube Calculation, but this gives me the same value as [Other Count].
SCOPE (({[Person].[Staff Code].[Staff Code].MEMBERS}, [Measures].[Has Other Qual]));
THIS = ([Person].[Staff Code].[Staff Code], [Measures].[Has Other Qual]).Count;
END SCOPE;
Is there a better MDX/Cube calculation that can be used, or any suggestions on improving performance?
This is unfortunately my first time working with MDX, and I ran into this problem close to a deadline, so I am trying to make this work without changes to the cube if possible.
I have resolved the issue by changing the cube, which was simpler than expected.
In the Data Source View, I created a named query that summarizes the existing fact table at Person level. I also derive all the columns that I will need in my reports.
Treating this named query as a separate fact table, I added a measure group for it and that resolved all my problems.
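For reference, a simplified sketch of such a named query, with hypothetical table and column names (the real query derives one flag column per report category):
SELECT
    StaffCode,
    CASE WHEN SUM(CASE WHEN QualType = 'Other' THEN 1 ELSE 0 END) > 0
         THEN 1 ELSE 0 END AS HasOtherQual
FROM FactQualification
GROUP BY StaffCode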