Delayed execution of a query with a good execution plan - query-performance

One form of my application is very slow. In this form, two queries are executed. The server's trace-level log shows that the first query is slow, yet when I execute the same query with the same parameters in PL/SQL Developer, the response time is less than one second. The execution plan is the same in the application and in PL/SQL Developer.
I don't know what the problem is.
INFO 962.50.0.1 2017-10-17 10:54:56,144 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - com.tosan.sipa.framework.security.authentication.ipAddress.TrustedIpConfigBasedAuthentication - Ip address 192.168.7.126 is valid for this request
INFO 962.50.0.1 2017-10-17 10:54:56,151 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - com.tosan.backoffice.cms.connector.webservice.hessian.CMSFacadeImpl - GetDubiousCardListRequest{filter=DubiousCardFilterDto{cardPAN='null',cardIssueDate='null', traceNo='null', mainApplicationType='null', cardOwnerId=null, branchCode=null, userCode=null, cardStatus=null, holderId=1010, multipurpose=null'}}
DEBUG 962.50.0.1 2017-10-17 10:54:56,167 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - org.hibernate.SQL -
select
*
from
( select
count(*) as col_0_0_
from
PSAM962.KCCARDS ecards0_,
PSAM962.KCSRVCCALL eserviceca1_,
PSAM962.KC3CNTC econtact2_
where
eserviceca1_.SCSTS=?
and eserviceca1_.SCOPRTNCOD=?
and ecards0_.CDPHYSSTS=?
and ecards0_.CDSTS<>?
and (
ecards0_.CDSTS<>?
or ecards0_.CDSTSINFO<>?
)
and ecards0_.CDCARDNO=substr(eserviceca1_.SCRSENTYKEY, instr(eserviceca1_.SCRSENTYKEY, '.')+1)
and ecards0_.SWITCHCODE=substr(eserviceca1_.SCRSENTYKEY, 1, instr(eserviceca1_.SCRSENTYKEY, '.')-1)
and eserviceca1_.SCCALLDAT=(
select
max(eserviceca3_.SCCALLDAT)
from
PSAM962.KCSRVCCALL eserviceca3_
where
eserviceca3_.SCRSENTYKEY=eserviceca1_.SCRSENTYKEY
)
and ecards0_.CDISUDT=(
select
max(ecards4_.CDISUDT)
from
PSAM962.KCCARDS ecards4_,
PSAM962.KCSRVCCALL eserviceca5_
where
ecards4_.CDCARDNO=substr(eserviceca5_.SCRSENTYKEY, instr(eserviceca5_.SCRSENTYKEY, '.')+1)
and ecards4_.SWITCHCODE=substr(eserviceca5_.SCRSENTYKEY, 1, instr(eserviceca5_.SCRSENTYKEY, '.')-1)
and eserviceca5_.SCPERTRCNO=eserviceca1_.SCPERTRCNO
)
and ecards0_.CDHLDRID=econtact2_.K3016ID
and ecards0_.CDHLDRID=?
)
where
rownum <= ?
DEBUG 962.50.0.1 2017-10-17 10:55:27,982 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - org.hibernate.SQL -
select
*
from
( select
ecards0_.CDCARDNO as col_0_0_,
ecards0_.CDHLDRID as col_1_0_,
ecards0_.CBC_CFCIFNO as col_2_0_,
ecards0_.CDISUDT as col_3_0_,
ecards0_.CDEXPIRE as col_4_0_,
ecards0_.CDSTS as col_5_0_,
ecards0_.CDISSRBRNCOD as col_6_0_,
ecards0_.CDISSRUSRCOD as col_7_0_,
eserviceca1_.SCPERTRCNO as col_8_0_,
ecards0_.CDISMLTIPRPS as col_9_0_,
ecards0_.CDMAINAPPTYP as col_10_0_,
ecards0_.CDMEDTYP as col_11_0_,
econtact2_.K3016NAM as col_12_0_,
econtact2_.K3016SNAM as col_13_0_
from
PSAM962.KCCARDS ecards0_,
PSAM962.KCSRVCCALL eserviceca1_,
PSAM962.KC3CNTC econtact2_
where
eserviceca1_.SCSTS=?
and eserviceca1_.SCOPRTNCOD=?
and ecards0_.CDPHYSSTS=?
and ecards0_.CDSTS<>?
and (
ecards0_.CDSTS<>?
or ecards0_.CDSTSINFO<>?
)
and ecards0_.CDCARDNO=substr(eserviceca1_.SCRSENTYKEY, instr(eserviceca1_.SCRSENTYKEY, '.')+1)
and ecards0_.SWITCHCODE=substr(eserviceca1_.SCRSENTYKEY, 1, instr(eserviceca1_.SCRSENTYKEY, '.')-1)
and eserviceca1_.SCCALLDAT=(
select
max(eserviceca3_.SCCALLDAT)
from
PSAM962.KCSRVCCALL eserviceca3_
where
eserviceca3_.SCRSENTYKEY=eserviceca1_.SCRSENTYKEY
)
and ecards0_.CDISUDT=(
select
max(ecards4_.CDISUDT)
from
PSAM962.KCCARDS ecards4_,
PSAM962.KCSRVCCALL eserviceca5_
where
ecards4_.CDCARDNO=substr(eserviceca5_.SCRSENTYKEY, instr(eserviceca5_.SCRSENTYKEY, '.')+1)
and ecards4_.SWITCHCODE=substr(eserviceca5_.SCRSENTYKEY, 1, instr(eserviceca5_.SCRSENTYKEY, '.')-1)
and eserviceca5_.SCPERTRCNO=eserviceca1_.SCPERTRCNO
)
and ecards0_.CDHLDRID=econtact2_.K3016ID
and ecards0_.CDHLDRID=?
)
where
rownum <= ?
INFO 962.50.0.1 2017-10-17 10:55:28,106 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - com.tosan.backoffice.cms.connector.webservice.hessian.CMSFacadeImpl - GetDubiousCardListResponse{DubiousCardViewList=[]}
INFO 962.50.0.1 2017-10-17 10:55:28,107 - 33ee31c1-b81b-41c3-ad5b-20ac0a38e154 - 192.168.7.126:22704 - com.tosan.backoffice.cms.connector.webservice.hessian.CMSFacadeImpl - Return value of getDubiousCardList method
PLAN:
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Byt
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | COUNT STOPKEY | | |
| 2 | VIEW | | 1 |
| 3 | SORT AGGREGATE | | 1 |
| 4 | VIEW | VM_NWVW_2 | 1 |
| 5 | FILTER | | |
| 6 | HASH GROUP BY | | 1 | 2
| 7 | FILTER | | |
| 8 | NESTED LOOPS | | 1 | 2
| 9 | NESTED LOOPS | | 1 | 2
| 10 | NESTED LOOPS | | 1 | 1
| 11 | HASH JOIN | | 1 | 1
| 12 | NESTED LOOPS | | 1 |
| 13 | INDEX UNIQUE SCAN | PK_KC3CNTC | 1 |
| 14 | TABLE ACCESS BY INDEX ROWID| KCCARDS | 1 |
| 15 | INDEX RANGE SCAN | ESH_CDHLDRID | 1 |
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
| 16 | TABLE ACCESS FULL | KCSRVCCALL | 12987 | 9
| 17 | TABLE ACCESS BY INDEX ROWID | KCSRVCCALL | 236 | 115
| 18 | INDEX RANGE SCAN | KCSRVCCALL_IDX01 | 2 |
| 19 | INDEX UNIQUE SCAN | PK_KCCARDS | 1 |
| 20 | TABLE ACCESS BY INDEX ROWID | KCCARDS | 1 |
| 21 | SORT AGGREGATE | | 1 |
| 22 | TABLE ACCESS BY INDEX ROWID | KCSRVCCALL | 1 |
| 23 | INDEX RANGE SCAN | ESH_1 | 1 |
--------------------------------------------------------------------------------

How to get max value group by another column from Pandas dataframe

I have the following dataframe. I would like to get the row with the maximum run_end_dt for each pipeline_name.
Here is the dataframe:
+----+-----------------+--------------------------------------+----------------------------------+
| | pipeline_name | runid | run_end_dt |
|----+-----------------+--------------------------------------+----------------------------------|
| 0 | test_pipeline | test_pipeline_run_101 | 2021-03-10 20:01:26.704265+00:00 |
| 1 | test_pipeline | test_pipeline_run_102 | 2021-03-13 20:08:31.929038+00:00 |
| 2 | test_pipeline2 | test_pipeline2_run_101 | 2021-03-10 20:13:53.083525+00:00 |
| 3 | test_pipeline2 | test_pipeline2_run_102 | 2021-03-12 20:14:51.757058+00:00 |
| 4 | test_pipeline2 | test_pipeline2_run_103 | 2021-03-13 20:17:00.285573+00:00 |
Here is the result I want to achieve:
+----+-----------------+--------------------------------------+----------------------------------+
| | pipeline_name | runid | run_end_dt |
|----+-----------------+--------------------------------------+----------------------------------|
| 0 | test_pipeline | test_pipeline_run_102 | 2021-03-13 20:08:31.929038+00:00 |
| 1 | test_pipeline2 | test_pipeline2_run_103 | 2021-03-13 20:17:00.285573+00:00 |
In the expected result, we keep only the runid with the max run_end_dt for each pipeline_name.
Thanks
Suppose your dataframe is stored in a variable named df.
Just use the groupby() method:
df.groupby('pipeline_name', as_index=False)[['runid', 'run_end_dt']].max()
Note that this takes the max of runid and run_end_dt independently within each group, so the pairing is correct only if the runids sort in the same order as the dates.
Use groupby followed by a transform. Get the indices of the rows which have the max value in each group.
idx = (df.groupby(['pipeline_name'], sort=False)['run_end_dt'].transform('max') == df['run_end_dt'])
df = df.loc[idx]
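As a quick, self-contained check of the transform approach (using the question's data, with timestamps shortened for brevity), note that the comparison keeps whole rows, so runid and run_end_dt stay paired:

```python
import pandas as pd

# The question's data, with timestamps shortened for brevity
df = pd.DataFrame({
    "pipeline_name": ["test_pipeline", "test_pipeline", "test_pipeline2",
                      "test_pipeline2", "test_pipeline2"],
    "runid": ["test_pipeline_run_101", "test_pipeline_run_102",
              "test_pipeline2_run_101", "test_pipeline2_run_102",
              "test_pipeline2_run_103"],
    "run_end_dt": pd.to_datetime([
        "2021-03-10 20:01:26", "2021-03-13 20:08:31",
        "2021-03-10 20:13:53", "2021-03-12 20:14:51",
        "2021-03-13 20:17:00",
    ]),
})

# Flag the rows whose run_end_dt equals the per-pipeline maximum,
# then keep the whole rows
idx = df.groupby("pipeline_name")["run_end_dt"].transform("max") == df["run_end_dt"]
latest = df.loc[idx].reset_index(drop=True)
print(latest["runid"].tolist())
# ['test_pipeline_run_102', 'test_pipeline2_run_103']
```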

How to create a calculated column in access 2013 to detect duplicates

I'm recreating a tool I made in Excel as it's getting bigger and performance is getting out of hand.
The issue is that I only have MS Access 2013 on my work laptop and I'm fairly new to the Expression Builder in Access 2013, which has a very limited function base to be honest.
My data has duplicates in the [Location] column, meaning that I have multiple SKUs in that warehouse location. However, some of my calculations need to be done only once per [Location]. My solution in Excel was a formula (see below) that puts 1 only on the first appearance of a location and 0 on subsequent appearances. That works like a charm, because summing over the [Duplicate] column while imposing multiple criteria returns the number of occurrences matching those criteria, counting each location only once.
Now, the MS Access 2013 Expression Builder has no SUM or COUNT functions with which to create a calculated column emulating my [Duplicate] column from Excel. Preferably, I would just input the raw data and let Access populate the calculated fields, rather than inputting the calculated fields as well, since that would defeat my original purpose of reducing the computational cost of creating my dashboard.
The question is: how would you create a calculated column in the MS Access 2013 Expression Builder to recreate the Excel function below?
= IF($D$2:$D3=$D4,0,1)
For the sake of reducing the file size (over 100K rows) I even replace the 0 with a blank string "".
Thanks in advance for your help
First and foremost, understand that MS Access's Expression Builder is a convenience tool for building an SQL expression. Everything in Query Design ultimately builds an SQL query. For this reason, you have to use a set-based mentality, seeing data as whole sets of related tables rather than with a cell-by-cell mindset.
Specifically, to achieve:
putting 1 only on the first appearance of that location, putting 0 on next appearances
Consider a whole set-based approach: join to a separate aggregate query that identifies the first ID for your needed grouping, then calculate the needed IIF expression. The below assumes you have an autonumber or primary key field in the table (a standard in relational databases):
Aggregate Query (save as a separate query, adjust columns as needed)
SELECT ColumnD, MIN(AutoNumberID) As MinID
FROM myTable
GROUP BY ColumnD
Final Query (join to original table and build final IIF expression)
SELECT m.*, IIF(agg.MinID = AutoNumberID, 1, 0) As Dup_Indicator
FROM myTable m
INNER JOIN myAggregateQuery agg
ON m.[ColumnD] = agg.ColumnD
To demonstrate with random data:
Original
| ID | GROUP | INT | NUM | CHAR | BOOL | DATE |
|----|--------|-----|--------------|------|-------|------------|
| 1 | r | 9 | 1.424490258 | B6z | TRUE | 7/4/1994 |
| 2 | stata | 10 | 2.591235683 | h7J | FALSE | 10/5/1971 |
| 3 | spss | 6 | 0.560461966 | Hrn | TRUE | 11/27/1990 |
| 4 | stata | 10 | -1.499272175 | eXL | FALSE | 4/17/2010 |
| 5 | stata | 15 | 1.470269177 | Vas | TRUE | 6/13/2010 |
| 6 | r | 14 | -0.072238898 | puP | TRUE | 4/1/1994 |
| 7 | julia | 2 | -1.370405263 | S2l | FALSE | 12/11/1999 |
| 8 | spss | 6 | -0.153684675 | mAw | FALSE | 7/28/1977 |
| 9 | spss | 10 | -0.861482674 | cxC | FALSE | 7/17/1994 |
| 10 | spss | 2 | -0.817222582 | GRn | FALSE | 10/19/2012 |
| 11 | stata | 2 | 0.949287754 | xgc | TRUE | 1/18/2003 |
| 12 | stata | 5 | -1.580841322 | Y1D | TRUE | 6/3/2011 |
| 13 | r | 14 | -1.671303816 | JCP | FALSE | 5/15/1981 |
| 14 | r | 7 | 0.904181025 | Rct | TRUE | 7/24/1977 |
| 15 | stata | 10 | -1.198211174 | qJY | FALSE | 5/6/1982 |
| 16 | julia | 10 | -0.265808162 | 10s | FALSE | 3/18/1975 |
| 17 | r | 13 | -0.264955027 | 8Md | TRUE | 6/11/1974 |
| 18 | r | 4 | 0.518302149 | 4KW | FALSE | 9/12/1980 |
| 19 | r | 5 | -0.053620183 | 8An | FALSE | 4/17/2004 |
| 20 | r | 14 | -0.359197116 | F8Q | TRUE | 6/14/2005 |
| 21 | spss | 11 | -2.211875193 | AgS | TRUE | 4/11/1973 |
| 22 | stata | 4 | -1.718749471 | Zqr | FALSE | 2/20/1999 |
| 23 | python | 10 | 1.207878576 | tcC | FALSE | 4/18/2008 |
| 24 | stata | 11 | 0.548902226 | PFJ | TRUE | 9/20/1994 |
| 25 | stata | 6 | 1.479125922 | 7a7 | FALSE | 3/2/1989 |
| 26 | python | 10 | -0.437245299 | r32 | TRUE | 6/7/1997 |
| 27 | sas | 14 | 0.404746106 | 6NJ | TRUE | 9/23/2013 |
| 28 | stata | 8 | 2.206741458 | Ive | TRUE | 5/26/2008 |
| 29 | spss | 12 | -0.470694096 | dPS | TRUE | 5/4/1983 |
| 30 | sas | 15 | -0.57169507 | yle | TRUE | 6/20/1979 |
SQL (uses aggregate in subquery but can be a stored query)
SELECT r.*, IIF(sub.MinID = r.ID,1, 0) AS Dup
FROM Random_Data r
LEFT JOIN
(
SELECT r.[GROUP], MIN(r.ID) As MinID
FROM Random_Data r
GROUP BY r.[GROUP]
) sub
ON r.[Group] = sub.[GROUP]
Output (notice the first GROUP value is tagged 1, all else 0)
| ID | GROUP | INT | NUM | CHAR | BOOL | DATE | Dup |
|----|--------|-----|--------------|------|-------|------------|-----|
| 1 | r | 9 | 1.424490258 | B6z | TRUE | 7/4/1994 | 1 |
| 2 | stata | 10 | 2.591235683 | h7J | FALSE | 10/5/1971 | 1 |
| 3 | spss | 6 | 0.560461966 | Hrn | TRUE | 11/27/1990 | 1 |
| 4 | stata | 10 | -1.499272175 | eXL | FALSE | 4/17/2010 | 0 |
| 5 | stata | 15 | 1.470269177 | Vas | TRUE | 6/13/2010 | 0 |
| 6 | r | 14 | -0.072238898 | puP | TRUE | 4/1/1994 | 0 |
| 7 | julia | 2 | -1.370405263 | S2l | FALSE | 12/11/1999 | 1 |
| 8 | spss | 6 | -0.153684675 | mAw | FALSE | 7/28/1977 | 0 |
| 9 | spss | 10 | -0.861482674 | cxC | FALSE | 7/17/1994 | 0 |
| 10 | spss | 2 | -0.817222582 | GRn | FALSE | 10/19/2012 | 0 |
| 11 | stata | 2 | 0.949287754 | xgc | TRUE | 1/18/2003 | 0 |
| 12 | stata | 5 | -1.580841322 | Y1D | TRUE | 6/3/2011 | 0 |
| 13 | r | 14 | -1.671303816 | JCP | FALSE | 5/15/1981 | 0 |
| 14 | r | 7 | 0.904181025 | Rct | TRUE | 7/24/1977 | 0 |
| 15 | stata | 10 | -1.198211174 | qJY | FALSE | 5/6/1982 | 0 |
| 16 | julia | 10 | -0.265808162 | 10s | FALSE | 3/18/1975 | 0 |
| 17 | r | 13 | -0.264955027 | 8Md | TRUE | 6/11/1974 | 0 |
| 18 | r | 4 | 0.518302149 | 4KW | FALSE | 9/12/1980 | 0 |
| 19 | r | 5 | -0.053620183 | 8An | FALSE | 4/17/2004 | 0 |
| 20 | r | 14 | -0.359197116 | F8Q | TRUE | 6/14/2005 | 0 |
| 21 | spss | 11 | -2.211875193 | AgS | TRUE | 4/11/1973 | 0 |
| 22 | stata | 4 | -1.718749471 | Zqr | FALSE | 2/20/1999 | 0 |
| 23 | python | 10 | 1.207878576 | tcC | FALSE | 4/18/2008 | 1 |
| 24 | stata | 11 | 0.548902226 | PFJ | TRUE | 9/20/1994 | 0 |
| 25 | stata | 6 | 1.479125922 | 7a7 | FALSE | 3/2/1989 | 0 |
| 26 | python | 10 | -0.437245299 | r32 | TRUE | 6/7/1997 | 0 |
| 27 | sas | 14 | 0.404746106 | 6NJ | TRUE | 9/23/2013 | 1 |
| 28 | stata | 8 | 2.206741458 | Ive | TRUE | 5/26/2008 | 0 |
| 29 | spss | 12 | -0.470694096 | dPS | TRUE | 5/4/1983 | 0 |
| 30 | sas | 15 | -0.57169507 | yle | TRUE | 6/20/1979 | 0 |
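Outside Access, the same MIN-per-group flagging can be sketched in a few lines of Python; this is only an illustration of the set-based idea on hypothetical toy data, not something you would run in Access itself:

```python
import pandas as pd

# Toy data: "ID" plays the autonumber role, "GRP" plays [Location]
df = pd.DataFrame({
    "ID": [1, 2, 3, 4, 5, 6],
    "GRP": ["r", "stata", "spss", "stata", "stata", "r"],
})

# Equivalent of the aggregate query: MIN(ID) per group, then flag
# the rows whose ID equals that minimum (first appearance = 1)
min_id = df.groupby("GRP")["ID"].transform("min")
df["Dup"] = (df["ID"] == min_id).astype(int)
print(df["Dup"].tolist())  # [1, 1, 1, 0, 0, 0]
```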

How to reference cell in formula where result met condition

Is there a way to write a formula for Variation so that it always relates to the latest cell where the Variation was greater than a threshold?
In the following table the denominator of the percentage changes if the absolute value of Variation is greater than 10%. The formulas were changed manually by me.
------------------------------------------
| Row | Value | Variation| Formula |
------------------------------------------
| 1 | 1,1608 | 0,0% | A2/ A$2 - 1 |
| 2 | 1,1208 | -3,4% | A3/ A$2 - 1 |
| 3 | 1,0883 | -6,2% | A4/ A$2 - 1 |
| 4 | 1,0704 | -7,8% | A5/ A$2 - 1 |
| 5 | 1,0628 | -8,4% | A6/ A$2 - 1 |
| 6 | 1,0378 | -10,6% | A7/ A$2 - 1 | <---- Abs. Variation > 10 %
| 7 | 1,0353 | -0,2% | A8/ A$7 - 1 | <---- Change denominator
| 8 | 1,0604 | 2,2% | A9/ A$7 - 1 |
| 9 | 1,0501 | 1,2% | A10/ A$7 - 1 |
| 10 | 1,0706 | 3,2% | A11/ A$7 - 1 |
| 11 | 1,0338 | -0,4% | A12/ A$7 - 1 |
| 12 | 1,0110 | -2,6% | A13/ A$7 - 1 |
| 13 | 1,0137 | -2,3% | A14/ A$7 - 1 |
| 14 | 0,9834 | -5,2% | A15/ A$7 - 1 |
| 15 | 0,9643 | -7,1% | A16/ A$7 - 1 |
| 16 | 0,9470 | -8,7% | A17/ A$7 - 1 |
| 17 | 0,9060 | -12,7% | A18/ A$7 - 1 | <---- Abs. Variation > 10 %
| 18 | 0,9492 | 4,8% | A19/A$18 - 1 | <---- Change denominator
| 19 | 0,9397 | 3,7% | A20/A$18 - 1 |
| 20 | 0,9041 | -0,2% | A21/A$18 - 1 |
------------------------------------------
Is it possible to write a formula where the denominator changes on a given condition?
All my attempts with array formulas, MATCH, AGGREGATE, etc. went nowhere.
Here is another way:
Place zero in E2.
In E3:
=IF(E2<-0.1,B3/B2-1,B3*(E2+1)/B2-1)
So what I'm trying to do is to work out the denominator from the previous row. So
E2=B2/denominator-1
Re-arranging you get
Denominator=B2/(E2+1)
So in the regular case you divide by this denominator, otherwise you divide by B2.
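To make the reset rule concrete, here is a small sketch over the question's first seven values, using the absolute 10% threshold from the question:

```python
# Variation relative to the last "anchor" value; the anchor resets
# whenever the absolute variation exceeds the 10% threshold
values = [1.1608, 1.1208, 1.0883, 1.0704, 1.0628, 1.0378, 1.0353]

anchor = values[0]
variations = []
for v in values:
    var = v / anchor - 1
    variations.append(var)
    if abs(var) > 0.10:  # threshold crossed: this value becomes the new denominator
        anchor = v

print([round(x, 3) for x in variations])
# [0.0, -0.034, -0.062, -0.078, -0.084, -0.106, -0.002]
```

The rounded output matches the Variation column in the question's table, including the denominator change after row 6.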
If it's possible to add another column to your data, it is very possible to do this with one IF statement. Your formula for the formula row would be:
=A2/E1-1
Column E formula (starting at E2) would be:
=IF(ABS(C2)>10, A2, E1)
Where E1 would be:
=A2
since that is what you have by default in your first formula.

Show text as value Power Pivot using DAX formula

Is there a way, using a DAX measure, to create a column that contains text values instead of the numeric sum/count that it will automatically give?
In the example below, the first name appears as a numeric value (in the first table) instead of as the actual name, as in the second.
Data table:
+----+------------+------------+---------------+-------+-------+
| id | first_name | last_name | currency | Sales | Stock |
+----+------------+------------+---------------+-------+-------+
| 1 | Giovanna | Christon | Peso | 10 | 12 |
| 2 | Roderich | MacMorland | Peso | 8 | 10 |
| 3 | Bond | Arkcoll | Yuan Renminbi | 4 | 6 |
| 1 | Giovanna | Christon | Peso | 11 | 13 |
| 2 | Roderich | MacMorland | Peso | 9 | 11 |
| 3 | Bond | Arkcoll | Yuan Renminbi | 5 | 7 |
| 1 | Giovanna | Christon | Peso | 15 | 17 |
| 2 | Roderich | MacMorland | Peso | 10 | 12 |
| 3 | Bond | Arkcoll | Yuan Renminbi | 6 | 8 |
| 1 | Giovanna | Christon | Peso | 17 | 19 |
| 2 | Roderich | MacMorland | Peso | 11 | 13 |
| 3 | Bond | Arkcoll | Yuan Renminbi | 7 | 9 |
+----+------------+------------+---------------+-------+-------+
No DAX needed. Put the first_name field on Rows, not on Values, and select Tabular View for the Report Layout.
After some search I found 4 ways.
measure 1 (will return blank if values differ):
=IF(COUNTROWS(VALUES(Table1[first_name])) > 1, BLANK(), VALUES(Table1[first_name]))
measure 2 (will return blank if values differ):
=CALCULATE(
VALUES(Table1[first_name]),
FILTER(Table1,
COUNTROWS(VALUES(Table1[first_name]))=1))
measure 3 (will show every single text value), thanks @Rory:
=CONCATENATEX(Table1,[first_name]," ")
For very large datasets this concatenation seems to work better:
=CALCULATE(CONCATENATEX(VALUES(Table1[first_name]),Table1[first_name]," "))

Values of a new table changing dynamically with the input of the initial table

I have a question in Excel and need your help!
I could do this if it were a static problem, but I need the end result to adjust to the input table, because sometimes the year finishes sooner or later, or the number of activities varies.
Initial table (it always goes in chronological order and only contains 1 or nothing; I put 0 where I wanted a blank because I didn't know how to do that. Also, the number of activities may vary):
+----------------+------+------+------+------+------+------+
| year | 2016 | 2016 | 2016 | 2017 | 2017 | 2017 |
+----------------+------+------+------+------+------+------+
| month calendar | 10 | 11 | 12 | 1 | 2 | 3 |
| month project | 1 | 2 | 3 | 4 | 5 | 6 |
| Activity 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| Activity 2 | 0 | 0 | 1 | 1 | 1 | 1 |
| Activity 3 | 0 | 0 | 0 | 1 | 1 | 0 |
| Activity 4 | 0 | 1 | 1 | 0 | 0 | 0 |
| Activity 5 | 1 | 1 | 1 | 1 | 1 | 1 |
+----------------+------+------+------+------+------+------+
What I want in another sheet:
+---------------+------------+------------+------------+------------+
| Activity year | 2016 | | | |
| | | | | |
| Activity 1 | Activity 2 | Activity 3 | Activity 4 | Activity 5 |
| 25,0% | 12,5% | 0,0% | 25,0% | 37,5% |
| | | | | |
| Activity year | 2017 | | | |
| | | | | |
| Activity 1 | Activity 2 | Activity 3 | Activity 4 | Activity 5 |
| 0,0% | 37,5% | 25,0% | 0,0% | 37,5% |
+---------------+------------+------------+------------+------------+
Now imagine that in the "another sheet" I have nothing, and I want the result to adjust to the initial table. How can I do this?
Sorry for the bad editing, but I can't do better.
Any help is welcome, thank you. I'll answer any question you might have.
I'm going to add something that is related to this and that I also need: a formula to get the number of different activities per year that happen at least once, for my calculations later. In this case the result would be 4 for 2016 and 3 for 2017.
