I've long sought a method for using INDEX MATCH in Excel to return the absolute closest number in an array without reorganizing my data (since MATCH requires lookup_array to be in descending order to find the closest value greater than lookup_value, but ascending order to find the closest value less than lookup_value).
I found the answer in this post. XOR LX's solution:
=INDEX(B4:B10,MATCH(TRUE,INDEX(ABS(A4:A10-B1)=MIN(INDEX(ABS(A4:A10-B1),,)),,),0))
worked perfectly for me, but I don't know why. I can rationalize most of it but I can't figure out this part
INDEX(ABS(A4:A10-B1)=MIN(INDEX(ABS(A4:A10-B1),,)),,)
Can anyone please explain this part?
I guess it makes sense for me to explain it, then!
Actually, it didn't help that I was employing a technique which is designed to circumvent having to enter a formula as an array formula, i.e. with Ctrl+Shift+Enter (CSE). Although that could be considered a plus by some accounts, I think I was wrong to employ it here, and probably wouldn't do so again.
The technique involves inserting extra INDEX functions at appropriate places within the formula. This forces the other functions, which without array-entry would normally act upon only the first element of any array passed to them, to instead operate over all elements within that array.
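For instance (a made-up illustration with numbers in A1:A5, nothing to do with the ranges above): in pre-dynamic-array Excel,
=MATCH(TRUE,A1:A5>3,0)
only works as intended when entered with CSE, whereas
=MATCH(TRUE,INDEX(A1:A5>3,,),0)
returns the position of the first value greater than 3 with normal entry, because the extra INDEX forces the comparison to be evaluated over the whole range.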
However, whilst inserting a single INDEX function for the purpose of avoiding CSE is, in my opinion, perfectly fine, I think that once you're using two or three (or even more) such coercions, you should probably re-think whether it's worth it (the few tests that I've done suggest that, in many cases, performance is actually worse in the non-array, INDEX-heavy version than in the equivalent CSE set-up). Besides, the use of array formulas is something to be encouraged, not avoided.
Sorry for the ramble, but it's actually kind of to the point: had I given you the array version, you may well not have come back looking for an explanation, since that version would look like:
=INDEX(B4:B10,MATCH(TRUE,ABS(A4:A10-B1)=MIN(ABS(A4:A10-B1)),0))
which is syntactically far easier to understand than the other version.
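For what it's worth, here is a quick worked evaluation with made-up data (not from your sheet): suppose A4:A10 holds {2;4;7;10;13;16;19} and B1 holds 9. Then:
ABS(A4:A10-B1) → {7;5;2;1;4;7;10}
MIN(ABS(A4:A10-B1)) → 1
ABS(A4:A10-B1)=MIN(ABS(A4:A10-B1)) → {FALSE;FALSE;FALSE;TRUE;FALSE;FALSE;FALSE}
so MATCH(TRUE,...,0) returns 4, and INDEX(B4:B10,4) returns the value alongside 10, the entry closest to 9.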
Let me know if that helps and/or you still want me to go through a breakdown of either solution, which I'd be happy to do.
You may also find the following links of interest (I hope that I'm not breaking any of this site's rules by posting these):
https://excelxor.com/2014/09/01/index-an-alternative-to-array-cse-formulas
https://excelxor.com/2014/08/18/index-returning-entire-rowscolumns
Regards
Our plugin maintains some instance parameter values across many elements, including those in groups.
Occasionally the end users will introduce data that activates an unused Category, so we have to update the document parameter bindings to include those categories. However, when we call
doc.ParameterBindings.ReInsert()
our existing parameter values inside groups are lost, because the VariesAcrossGroups flag is toggled back to false.
How did Revit intend this to work - are we supposed to use this in a different way, to not trigger this problem?
ReInsert() expects a base Definition argument, and would usually get an ExternalDefinition supplied.
To learn, I instead tried to scan through the definition-keys of existing bindings and match those.
This way, I got the document's InternalDefinition, and tried calling ReInsert with that instead (my hope was that, since the existing InternalDefinition DID include VariesAcrossGroups=true, this would help). Alas, ReInsert doesn't seem to care.
The problem, as you might guess, is that after VariesAcrossGroups=False, a lot of my instance parameters have collapsed into each other, so they all hold identical values. Given that they are IDs, this is less than ideal.
My current (intended) solution is to instead grab a backup of all existing parameter values BEFORE I update the bindings; then, after the binding update, set VariesAcrossGroups back to true, inspect all values, and re-assign the parameter values that were broken. But as you may surmise, this is less than ideal: it will be horribly slow for the users of our plugin, and frankly it seems like something the Revit API should take care of, not the plugin developer.
Are we using this the wrong way?
One approach I have considered is to bind every possible category I can think of, up front and once only. But I'm not sure that is possible. Categories in themselves are also difficult to work with, as you can only create them indirectly, by using your project Document as a factory (i.e. you cannot create a category yourself; you can only indirectly ask the Document to - maybe! - create the category you request). Because of this, I don't think you can bind for all categories up front - some categories only become available in the document AFTER you have included a given family/type in your project.
To sum it up: First, I
doc.ParameterBindings.ReInsert()
my binding, with the updated categories. Then, I call
InternalDefinition.SetAllowVaryBetweenGroups()
(after having determined that IDEF.VariesAcrossGroups has reverted back to false).
I am interested to hear the best way to do this, without destroying the client's existing data.
Thank you very much in advance.
(I'm not sure I will accept my own answer).
My answer is just that you can work around this problem by scanning the entire Revit database for your existing parameter values before you update the document bindings.
Afterwards, you reset VariesAcrossGroups back to its lost value (true).
Then, you iterate through your collected parameters, verify which ones have lost their original value, and reset them back to their intended value.
One trick that speeds this up a bit: you can check Element.GroupId <> -1, i.e. restrict yourself to those elements that are group members.
You only need to track group members, as it's precisely those that are affected by this Revit bug.
A further tip: you should not only watch out for parameter values that have lost their original value. You must also watch out for parameter values that have accidentally GOTTEN a value, but which should be left un-set.
I just use FilteredElementCollector with WhereElementIsNotElementType().
Performance-wise, it is of course horrible to do all this, but given how Revit behaves, I see no other solution if you have to ship to your clients.
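For reference, here is a rough C# sketch of the flow described above. This is my own illustration, not tested code: the method name, the assumption that the parameter stores strings, the lookup of the InternalDefinition by name, and the use of an empty string to clear accidentally gained values are all assumptions.

using System.Collections.Generic;
using Autodesk.Revit.DB;

public static class RebindHelper
{
    // Sketch: snapshot group members' values, rebind, restore the flag, repair values.
    public static void RebindPreservingGroupValues(Document doc, Definition def, CategorySet cats)
    {
        // 1. Snapshot current values, but only for group members (the affected elements).
        var snapshot = new Dictionary<ElementId, string>();
        foreach (Element e in new FilteredElementCollector(doc).WhereElementIsNotElementType())
        {
            if (e.GroupId == ElementId.InvalidElementId) continue; // not a group member
            Parameter p = e.LookupParameter(def.Name);
            if (p != null && p.StorageType == StorageType.String)
                snapshot[e.Id] = p.AsString(); // null records "was un-set"
        }

        using (var t = new Transaction(doc, "Rebind parameter"))
        {
            t.Start();

            // 2. Re-insert the binding with the updated category set.
            Binding binding = doc.Application.Create.NewInstanceBinding(cats);
            doc.ParameterBindings.ReInsert(def, binding);

            // 3. ReInsert has reset VariesAcrossGroups, so switch it back on.
            var it = doc.ParameterBindings.ForwardIterator();
            while (it.MoveNext())
            {
                var idef = it.Key as InternalDefinition;
                if (idef != null && idef.Name == def.Name && !idef.VariesAcrossGroups)
                    idef.SetAllowVaryBetweenGroups(doc, true);
            }

            // 4. Repair: re-assign lost values, and clear values that were accidentally
            //    gained (empty string as a stand-in for "un-set"; adjust as needed).
            foreach (KeyValuePair<ElementId, string> kv in snapshot)
            {
                Element e = doc.GetElement(kv.Key);
                Parameter p = (e == null) ? null : e.LookupParameter(def.Name);
                if (p != null && p.AsString() != kv.Value)
                    p.Set(kv.Value ?? string.Empty);
            }

            t.Commit();
        }
    }
}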
I've been having some fun creating a rather extensive inventory in Google Sheets for my collection of trading cards. I buy the majority of my collectibles in lots, meaning that I pay a total of X dollars for Y number of cards of different values (as opposed to buying each card individually).
In my spreadsheet I have a "Purchase Price" column where I enter the price I paid for each card. If I buy 1 lot of 10 cards, to find the value of each of those cards you would just divide the cost of the lot by the number of cards in the lot. So if I purchased 1 lot of 10 cards for a total of $100, the Purchase Price of each card would equal $10. Simple enough, right?
Well, it would be if you were OK with the rare, uncommon, and common cards in the lot all having the same exact purchase price, even though their real market values would all be different. So, what I did was create a formula that automatically adjusts the purchase price for each card that's part of a lot based on its rarity, so it's at least closer to the actual market value of the card.
Here is the formula:
=IFS(B2="C",D2*$B$15*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2,
B2="U",D2*$B$16*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2,
B2="R",D2*$B$17*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2)
Not sure if that means much to anyone, so here's a link to an example spreadsheet of the formula in action below.
And if you don't care to check that out, here's a screenshot:
The problem:
So the formula works exactly how I want it to work EXCEPT when there are 0 commons in a lot. When that happens I get a #DIV/0! error saying that "Function DIVIDE parameter 2 cannot be zero." I understand why this is happening since it doesn't like to divide by 0 in the first line, but what I don't understand is how to fix it.
How can I fix the DIV error, or is there a better way to do this, perhaps an alternative formula or approach? I am not a programmer and somewhat of a beginner at Excel.
Two suggestions.
Embed each division in an IFERROR() function as shown below. This function will return zero instead of an error (you can substitute another calculation for the zero). In fact, depending upon the level at which you introduce the IFERROR (embracing just one of the three calculations, or all three), you might instead choose to embed the IFS in another IFS that tests for zeroes. Once you have no more divisions by zero, there is no more need for IFERROR. So, it becomes a question of formula efficiency.
=IFERROR(D2*$B$15*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2,0)
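Incidentally (this follows algebraically from the formula in the question), the leading D2* and the trailing /D2 cancel in every branch, so an equivalent form avoids the division by zero altogether; for the common ("C") case:
=$B$15*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))
A lot with zero commons then no longer triggers #DIV/0!, since the denominator stays positive as long as the lot contains at least one card.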
Forget about all of this and seek a commercially logical solution. The logic says that you never buy a lot unless it contains some items you want, and the seller never has a lot that doesn't contain rubbish. In the end you get inundated with commons, meaning you have more of them than you can ever hope to sell. So, what's their real, commercial value? Valuate your rare and uncommon cards individually and all the baggage not at all. You will find the outcome more realistic both for the Commons and the Rare. BTW, that's what they do with stamp or coin collections.
I realise this is more a comment than an answer, but it's too long to post as a comment:
Your formula is unreadable, as you can see:
=IFS(B2="C",D2*$B$15*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2,
B2="U",D2*$B$16*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2,
B2="R",D2*$B$17*G2/((D2*$B$15)+(E2*$B$16)+(F2*$B$17))/D2)
First I'd advise you to create a new column (you can always hide it), I:I, which contains the following formula (for I2):
=D2*G2/((D2*const_weight_common)+(E2*const_weight_uncommon)+(F2*const_weight_rare))/D2
(And you give this a meaningful header name)
As far as names for $B$15:$B$17 are concerned, do something like:
$B$15 : const_weight_common
$B$16 : const_weight_uncommon
$B$17 : const_weight_rare
(You do know how to use names in Excel?)
Like this, your formula becomes:
=IFS(B2="C",I2 * const_weigth_common,
B2="U",I2 * const_weigth_uncommon,
B2="R",I2 * const_weigth_rare);
As far as your error is concerned: as mentioned in another answer, this might be tackled using the =IF() formula, so I2 becomes:
=IF(D2<>0;...;_ERROR_VALUE)
(up to you how to change your error value)
Like this, your formulas become much clearer and it will be easier to solve possible problems.
I Ain't No Math-A-Magician
But I can help you with this...
The way I see it, there are three schools of thought and you need to figure out which one is yours:
The programmer - trapping #div/0 errors all day
The purist - there is only one answer, and it is undefined
The pragmatic - a graph shows results approaching a limit
I think the programmer is either dangerous or ineffectual, or dangerously ineffectual. Sure, he can trap the error, but what exactly does that do? I'll tell you exactly what it does: it puts lipstick on a pig. It's literally replacing one string, "#div/0!", with a different, more aesthetic string. Or he can play second fiddle to the devil, publicly killing the one bug everyone knew how to defend against while creating another.
I think the purist is right about one thing: the answer is what it is and it can't be anything else; but he's also wrong. A precisely known theoretical answer may very well be undefined, but where in the real world has anyone ever seen division by zero? It's a mathematical construct, like infinity; we can safely ignore it. Don't believe me? What happens when you divide the sun by zero? Go ahead, I'll wait for your answer.
I think these things because I am grounded in pragmatism. One string is not better than the other if they both symbolize an attempt to divide by zero. I prefer knowing who my enemies are, so I may keep them in front of me. Nature truly abhors the undefined and is only slightly displeased with a vacuum.
I use utl_match.jaro_winkler in order to compare company names. In most cases it works fine, but sometimes I get pretty weird results.
This for example returns 0.62:
utl_match.jaro_winkler('ГОРОДСКАЯ КЛИНИЧЕСКАЯ БОЛЬНИЦА 18','ДИНА');
Those are absolutely different names, both in length and in symbols! How could it be 62%?
Another example:
SELECT utl_match.jaro_winkler('ООО МЕГИ', 'МЕГИ')
This returns 0! Despite the fact that those are very similar strings.
It feels like I should use something more complicated and advanced than just upper() and utl_match.jaro_winkler(). But I have no idea what exactly.
What would you recommend? What are the best practices for comparing two strings? Where can I read about this?
We have been working on creating scoring profiles for our search. We need a way to "bury" or give "negative" boosts to some fields when using scoring functions of type "Magnitude", "Freshness", or "Tags". We noticed that we cannot add a negative value for boost. Is there any other way to achieve this kind of behavior (burying results based on a field)?
We cannot use $OrderBy because it takes precedence over the scoring profile.
Please advise. Thanks!
You should only set positive boosting values, as described in the documentation. There may be a few things you could do. The first thing I would try is to set the weight to 0 for the fields that you do not care about. In that case, they will simply not impact the relevance.
Another option: If you know that a field should not impact relevance you could simply make that field not 'searchable'. That said, this is a property of the index definition -- so you would need to create a different index for each combination of non-searchable fields.
Depending on your scenario, you could also make a field filterable, and filter based on that field. Something like $filter=Freshness eq 'Really Fresh'. See the documentation for more information on using filters.
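As a sketch of the weight-zero suggestion, the relevant fragment of an index definition would look something like this (the profile and field names are made up):

"scoringProfiles": [
  {
    "name": "buryTags",
    "text": {
      "weights": {
        "description": 1.0,
        "tags": 0
      }
    }
  }
]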
thanks!
-Luis Cabrera
For "Magnitude", "Freshness", you can set the set the range start as higher value and range end as lower value. Would this be considered as negative impact?
I resolved that scenario by creating negative values (using an INT field) for the field we wanted to bury. That gave us the negative boost we needed.
I used a similar technique for date "Freshness" too: we counted the days from some event, so the higher the number, the less fresh the date, and used a "Magnitude" function for it.
Thanks!
I have thought about the need for this too.
One idea I have, but haven't tried, is to do a second search on just the negative keywords. That search result will have scores as well.
Then use those scores in a function to reduce the scores of the first search's results.
(Yes, it would be nicer if it could be done as part of ACS.)
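To make the combination step concrete, one simple form it could take (my own sketch, not an ACS feature):
adjustedScore = primaryScore - k * negativeScore
where k is a tuning constant controlling how strongly the negative-keyword matches push a document down.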
I just joined recently and am really excited to dive into the world of programming. There is still a ton of stuff I don't know, but I'm very proud of myself because I feel like I'm making headway into programming, whereas I used to have a mental block before. I've always been an infrastructure type of gal. But anyway --
I am creating an Excel spreadsheet for my new budget. Here is a screencap of my problem (according to the rules, new users can't attach images):
http://i66.tinypic.com/hx53zm.png
So this is what I want it to do, logically speaking: Stay blank (B38) until something is entered. Do (B7-B14-B36) if all the fields have something in them. Otherwise, just subtract whatever's in either B14 and/or B36 from B7.
I'm sure it's really simple -- I just lack the knowledge since I'm new. I have been playing around with this for a few days and searching on Google, and I can't figure out how to make it work for my spreadsheet. I have tried the CountA, Count, If, Isblank statements... and just can't get it to go.
This isn't really important to anything in my life, it's just something I'm making for myself to keep my financials in order -- AND to give me practice with some coding.
Thank you for any help you can give me!
Chris
If I understand you correctly, you will want to add the following to B38:
=IF(B7 <> ""; B7-B14-B36; "")
Depending on your regional settings, you may need to replace the ; with ,
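Note that blank cells are treated as zero in arithmetic, so B7-B14-B36 still works when only B14 or only B36 has something in it, which covers your "subtract whatever's in either B14 and/or B36" case.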