I have a MonoTouch.Dialog with UIViewElements that contain a UIWebView. This is a messaging-type app and messages are constantly being added. I can easily get over 1000 UIViewElements and there is no limit, so memory is obviously going to be an issue.
What is the best way to only keep the last 100 rows?
I need a first-in, first-out type of setup. I can remove the Elements manually, but I don't know whether there is a better way.
As far as I know, you do have to do this manually. I used TweetStation as an example.
Though it is not the most recent example you can find (2010), it covers a lot of the basics you need to understand.
Here is more information on building custom cells, if you are not familiar with that already.
The core part of managing your list of elements is to override the CreateSizingSource method on your DialogViewController subclass and create your own SizingSource, as shown in this part of TweetStation.
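Beyond the sizing source, capping the list at the last 100 rows is just a bounded first-in, first-out buffer. MonoTouch.Dialog itself is C#, so the snippet below is only a language-neutral sketch of the trimming logic (the 100-row cap and the message strings are made up); in the app you would drop the oldest Elements from their Section in the same spirit.

    from collections import deque

    MAX_ROWS = 100                      # assumed cap; pick whatever limit you need
    messages = deque(maxlen=MAX_ROWS)   # appending past the cap drops the oldest entry

    for i in range(1000):
        messages.append("message %d" % i)

    print(len(messages))                # 100
    print(messages[0])                  # "message 900" -- the oldest row still kept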
I hope this will get you started.
Does anyone know if it's possible to open multiple positions with only a single data feed? I am trying to do a second buy while already in a position, which doesn't seem to be possible.
Nobody seems to address this issue. Does anyone have experience with Backtrader and any input?
If you are just trying to buy more stock to add to your position, then yes, you should be able to do this; if you cannot, recheck your strategy code in next().
If you are trying to track two separate positions of the same data...
One cannot have two separate positions in the same data feed. You may open additional positions if you like, but Backtrader will combine them. Even if you use two strategies, you will still have one combined broker.
The reason for this is to simulate real-world conditions as closely as possible. If you have a brokerage account, you most likely have just one position per instrument. (I know there are exceptions.)
One solution would be to manually track the trades that result from different signals/sub-strategies in a dictionary. It's a bit more tedious to develop, but very doable.
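A minimal sketch of that idea, assuming a single data feed and two invented entry signals (the indicator periods, sizes, and signal names are placeholders): the broker still nets everything into one position, but the dictionary keeps each logical sub-position separate so you can exit them independently.

    import backtrader as bt

    class MultiSignalStrategy(bt.Strategy):
        def __init__(self):
            self.trades = {}                      # signal name -> size currently held
            self.fast = bt.ind.SMA(period=10)
            self.slow = bt.ind.SMA(period=30)

        def next(self):
            # Sub-strategy 1: trend entry
            if self.fast[0] > self.slow[0] and "trend" not in self.trades:
                self.buy(size=10)                 # broker position becomes 10
                self.trades["trend"] = 10

            # Sub-strategy 2: add while already in a position
            if self.data.close[0] < self.data.close[-5] and "dip" not in self.trades:
                self.buy(size=5)                  # broker nets this to a single position of 15
                self.trades["dip"] = 5

            # Close only the "dip" sub-position, leaving "trend" untouched
            if "dip" in self.trades and self.data.close[0] > self.data.close[-1]:
                self.sell(size=self.trades.pop("dip"))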
Our plugin maintains some instance parameter values across many elements, including those in groups.
Occasionally the end users introduce data that activates a previously unused Category, so we have to update the document's parameter bindings to include those categories. However, when we call
doc.ParameterBindings.ReInsert()
our existing parameter values inside groups are lost, because our VariesAcrossGroups flag is toggled back to false.
How is this intended to work in Revit? Are we supposed to use this differently, so that we don't trigger the problem?
ReInsert() expects a base Definition argument and would usually get an ExternalDefinition supplied.
To learn more, I instead tried to scan through the definition keys of the existing bindings and match those. This way, I got hold of the document's InternalDefinition and tried calling ReInsert with that instead (my hope was that, since the existing InternalDefinition DID have VariesAcrossGroups=true, this would help). Alas, ReInsert doesn't seem to care.
The problem, as you might guess, is that after VariesAcrossGroups=False, a lot of my instance parameters have collapsed into each other, so they all hold identical values. Given that they are IDs, this is less than ideal.
My current (intended) solution is to take a backup of all existing parameter values BEFORE I update the bindings, then, after the binding update and after setting VariesAcrossGroups back to true, inspect all values and re-assign the parameter values that have been broken. But as you may surmise, this is less than ideal: it will be horribly slow for the users of our plugin, and frankly it seems like something the Revit API should take care of, not the plugin developer.
Are we using this the wrong way?
One approach I have considered is to bind every possible category I can think of, up front and once only, but I'm not sure that is even possible. Categories themselves are also difficult to work with, as you can only create them indirectly, by using your project document as a factory (i.e. you cannot create a category yourself; you can only ask the document to - maybe! - create the category you request). Because of this, I don't think you can bind all categories up front; some categories only become available in the document AFTER you have included a given family/type in your project.
To sum it up: First, I
doc.ParameterBindings.ReInsert()
my binding, with the updated categories. Then, I call
InternalDefinition.SetAllowVaryBetweenGroups()
(after having determined that the InternalDefinition's VariesAcrossGroups has reverted to false).
I am interested to hear the best way to do this, without destroying the client's existing data.
Thank you very much in advance.
(I'm not sure I will accept my own answer).
My answer is simply that you can work around this problem by scanning the entire Revit database for your existing parameter values before you update the document bindings.
Afterwards, you set VariesAcrossGroups back to the value it lost.
Then you iterate through your collected parameters, check which ones have lost their original value, and reset them to their intended value.
One trick that speeds this up a bit is to check Element.GroupId <> -1, i.e. restrict yourself to elements that are group members.
You only need to track elements which are group members, as it's precisely those that are affected by this Revit bug.
A further tip: do not only watch out for parameter values that have lost their original value. You must also watch out for parameter values that have accidentally ACQUIRED a value but should have been left un-set.
I just use FilteredElementCollector with WhereElementIsNotElementType().
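Put together, a rough sketch of that backup/restore flow, written as Python you might run via pyRevit or the Revit Python Shell (the C# version uses the same calls). The parameter name "MyParam" and the doc, definition, binding and internal_def variables are placeholders for whatever your plugin already holds, and the example assumes a text parameter read back with AsString():

    from Autodesk.Revit.DB import FilteredElementCollector, ElementId, Transaction

    PARAM_NAME = "MyParam"   # placeholder for your shared parameter

    # 1. Back up current values, but only for group members -- those are the
    #    elements whose values collapse when VariesAcrossGroups flips back.
    backup = {}
    for el in FilteredElementCollector(doc).WhereElementIsNotElementType():
        if el.GroupId == ElementId.InvalidElementId:
            continue                          # not in a group, not affected
        p = el.LookupParameter(PARAM_NAME)
        if p is not None:
            backup[el.Id] = p.AsString()      # None means "was un-set"

    # 2. Re-insert the binding with the updated category set, then restore
    #    the flag that ReInsert resets.
    t = Transaction(doc, "Rebind parameter")
    t.Start()
    doc.ParameterBindings.ReInsert(definition, binding)
    internal_def.SetAllowVaryBetweenGroups(doc, True)
    t.Commit()

    # 3. Put back any values that were collapsed or spuriously filled in.
    t = Transaction(doc, "Restore parameter values")
    t.Start()
    for el_id, old_value in backup.items():
        p = doc.GetElement(el_id).LookupParameter(PARAM_NAME)
        if p is not None and p.AsString() != old_value:
            # setting "" approximates clearing a text parameter that should stay un-set
            p.Set(old_value if old_value is not None else "")
    t.Commit()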
Performance-wise, it is of course horrible to do all this,
but given how Revit behaves, I see no other solution if you have to ship to your clients.
Our adoption of DevOps is continuing (slowly but surely). One thing we've noticed is that some people are putting excessive estimates in for their time, when what we really want to encourage is for people to break work down into multiple tasks.
Is there a way that we can set our DevOps work items to only accept a maximum value? I've had a look at the 'rules' and there doesn't seem to be anything there to let us do this, and because it's an out-of-the-box field I don't think we can put a value limit on it.
I suppose what I want to understand is whether it would be possible to do this in some way? Could I do something with the existing 'Original Estimate' field or would I have to create a new custom field to have any chance of preventing people from putting in 100 hours for something that's actually more like 2?
If you are also using Boards, you could highlight work items where the original estimate is higher than a certain value. This would not prevent setting these values, but rather encourage the users to put in lower values.
https://learn.microsoft.com/en-us/azure/devops/boards/boards/customize-cards?view=azure-devops
Beware that this might not really address the underlying issue: people must be convinced of the benefits of splitting up tasks, otherwise they will just work around the tooling, e.g. by always entering the maximum value or by not recording their actual hours.
Is there a way that we can set our DevOps work items to only accept a maximum value?
I am afraid that setting a value limit for the Original Estimate field is currently not supported.
As a workaround, you could create a custom field of type Picklist and specify the allowed values in it.
You could add a request for this feature on our UserVoice site, which is our main forum for product suggestions. After the suggestion is raised, you can vote for it and add your comments. The product team provides updates when they review it.
I would like to implement a GUI handling a huge number of rows and I need to use GTK in Linux.
I started looking at GtkTreeView with list models, but I don't think that adding millions of rows directly to that widget will result in a GUI that doesn't slow down the application.
Do you know whether there is an existing GTK widget for this problem, or do I have to handle the window frame that displays those rows myself? If it comes to that, I would draw the data directly using GtkDrawingArea (essentially writing a new widget).
Any suggestions on GTK topics or projects I could look at as a starting point for my research?
As suggested in the comments, you can use a cell data func and keep the displayed data under control. But I have another idea: millions of rows are far more information than any human user can see and understand, so maybe a better, more usable and user-friendly solution is to display the data in a way the users can navigate more easily.
Imagine opening a huge hierarchy, scrolling down, and forgetting what were the top-level items you opened.
Example of a possible solution: have a combo box that lets the user choose some filter or category, reducing the data to a reasonable amount that the user can more easily navigate and, if necessary, build a mental model of.
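For instance, a small PyGObject (GTK 3) sketch of that combo-box idea, using a Gtk.TreeModelFilter on top of the full Gtk.ListStore so the view only ever renders the chosen category (the categories and row count here are invented for illustration):

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    # Full data set lives in the model; the view only ever sees the filtered part.
    store = Gtk.ListStore(str, str)                 # category, text
    for i in range(100000):
        store.append(["even" if i % 2 == 0 else "odd", "row %d" % i])

    current = ["even"]                              # currently selected category

    def row_visible(model, it, data):
        return model[it][0] == current[0]

    filtered = store.filter_new()
    filtered.set_visible_func(row_visible)

    view = Gtk.TreeView(model=filtered)
    view.append_column(Gtk.TreeViewColumn("Text", Gtk.CellRendererText(), text=1))

    combo = Gtk.ComboBoxText()
    for c in ("even", "odd"):
        combo.append_text(c)
    combo.set_active(0)

    def on_changed(widget):
        current[0] = widget.get_active_text()
        filtered.refilter()                         # re-run the visibility check

    combo.connect("changed", on_changed)

    win = Gtk.Window(title="Filtered list")
    box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
    scrolled = Gtk.ScrolledWindow()
    scrolled.add(view)
    box.pack_start(combo, False, False, 0)
    box.pack_start(scrolled, True, True, 0)
    win.add(box)
    win.set_default_size(400, 600)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()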
NOTE: As far as I know, GtkTreeView doesn't support sorting/filtering and drag-n-drop at the same time, so if you want to use both features, I suggest you use the existing drag-n-drop functionality (otherwise very complicated to implement by hand) and implement your own sorting/filtering.
I'm creating a Book store in Magento and am having trouble figuring out the best way to handle the Authors of a Book (which would be the product).
What I currently have is an attribute called "authors" which is multi-select and has a thousand [test] values. It's still manageable, but it does get a little slow when editing a product. Also, when adding an option/value to the authors attribute itself, a huge list is rendered in the HTML, making this an inefficient solution.
Is there another approach I should take?
Is it possible to create an Author object (entity type?) which is associated with a product through a join table? If yes, can someone explain how that is done or point me to some good documentation?
If I took the Author object approach, could that still be used in layered navigation?
How would I show the list of all books for a single author?
Thanks in advance!
PS: I am aware of extensions like Improved Navigation but AFAIK it adds something like attributes to attributes themselves which is not what I'm looking for.
For Googlers: the same would apply to artists on a music site, or to manufacturers.
If you create an author entity type, you'll just increase your work trying to add it to layered navigation, and I don't see a reason why it would be faster.
Your approach seems the best fit to the problem, given the way Magento is set up. How are you going to display 1,000 (which presumably pales in comparison to the actual list) authors in layered navigation?
Depending on the requirements, you could go the route of denormalizing the field and accepting free text for it. That would still allow you to display it, search on it, etc., but would eliminate the need to render every possible artist just to manipulate the list. You could also add a little code around selecting the proper artist (basically an AJAX autocomplete on the backend field) to minimize typos.
Alternatively, you could write a simple utility to add a new artist to the system without some of the overhead of Magento loading the full list. To be honest, though, the lag this has the potential to create on the frontend will probably outweigh the backend trouble.
Hope that helps!
Thanks,
Joe