I am using adehabitatHR to calculate home ranges (HRs) with the LoCoH method. I am attempting to find the optimal settings, but am having trouble because at a number of settings I get the following:
y<-LoCoH.k(mastermatrix, k=6, unin="m", unout="m", duplicates="random")
Error in rgeos::createPolygonsComment(oobj) :
rgeos_PolyCreateComment: orphaned hole, cannot find containing polygon for hole at index 19
I generally understand what this means; however, all of the advice for orphaned holes and rgeos covers situations where the person is coding the unions themselves, rather than using another package that calls rgeos under the hood.
I can get HRs at a number of different settings, so I know my data are sufficient, but I want to make sure that I am optimizing my settings. Any advice as to what aspects of my data are causing this would be helpful.
Do I need to do further QAQC on my location data?
Are there possibly settings I am missing?
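For reference, this is the sort of duplicate-fix check I could run as extra QAQC, sketched in Python/pandas purely for illustration (my actual analysis is in R, and the file and column names here are made up):

import pandas as pd

# Hypothetical relocation file with x/y coordinate columns (names assumed).
locs = pd.read_csv("relocations.csv")

# Count fixes that share exact coordinates with another fix; these are what
# duplicates="random" has to jitter, and clusters of identical points are a
# plausible source of degenerate hull polygons.
dup_mask = locs.duplicated(subset=["x", "y"], keep=False)
print(f"{dup_mask.sum()} of {len(locs)} fixes share coordinates with another fix")

# One QAQC option: drop exact duplicates before building the home ranges,
# rather than letting LoCoH.k resolve them at run time.
locs.drop_duplicates(subset=["x", "y"]).to_csv("relocations_deduped.csv", index=False)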
Does someone know if it's possible to open multiple positions with only a single data feed? I am trying to do a second buy whilst in a position, which doesn't seem to be possible.
Nobody seems to address this issue. Does anyone have experience with Backtrader and any input?
If you are just trying to buy more stock to add to your position, then yes, you should be able to do this; if you cannot, recheck your strategy code in next().
If you are trying to track two separate positions of the same data...
One cannot have two separate positions in the same data feed. You may trade additional positions if you like but they will be combined in Backtrader. Even if you use two strategies you will still have one combined broker.
The reason for this is to simulate real-world conditions as closely as possible. If you have a brokerage account, you would most likely have just one position. (I know there are exceptions.)
One solution would be to manually track, in a dictionary, the trades that result from different signals/sub-strategies. It's a bit more tedious to develop but very doable.
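A minimal sketch of that dictionary bookkeeping (the entry signals below are placeholders, not a real strategy):

import backtrader as bt

class MultiSignalTracker(bt.Strategy):
    # The broker still nets everything into one position, but we keep our own
    # ledger of which sub-strategy/signal opened which entries.

    def __init__(self):
        self.virtual_positions = {"signal_a": [], "signal_b": []}  # signal -> fills
        self._pending = {}  # order ref -> signal name

    def next(self):
        # Placeholder entry rules; substitute your real signals here.
        if self.data.close[0] > self.data.close[-1]:
            order = self.buy(size=10)
            self._pending[order.ref] = "signal_a"
        elif self.data.close[0] < self.data.close[-1]:
            order = self.buy(size=5)
            self._pending[order.ref] = "signal_b"

    def notify_order(self, order):
        if order.status == order.Completed and order.ref in self._pending:
            signal = self._pending.pop(order.ref)
            self.virtual_positions[signal].append(
                (order.executed.size, order.executed.price))

Each list in virtual_positions can then be sized or closed out independently in your own code, even though Backtrader's broker only ever sees the combined position.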
I am researching a problem that is pretty unique.
Imagine a roadside assistance company that wants to dynamically route its vehicles. For each batch of new incidents, it wants to create routes that will satisfy them, subject to some constraints (time constraints, road accessibility, vehicle-incident matching).
The company has a heterogeneous fleet of vehicles (motorbikes for easy cases, up to tow trucks for the hard cases), and each incident states its requirements (we know whether it just needs fuel or needs towing).
There is no depot, only the vehicles roaming on the streets.
The objective is to dynamically create routes on the fly, with the goal of minimizing time and total traveled distance.
Have you ever met such a problem? Do you have any idea in which VRP variant it belongs?
I have seen two previous questions, but unfortunately they don't fit my problem.
They are optaplanner - VRP but with no depot and Does optaplanner out of box support VRP with multiple trips and no depot, which are both open VRPs.
Unfortunately I don't have code right now, as I am still modelling the way I will approach this problem.
I am really sorry for creating a suggestion question and not a real one.
Thank you so much in advance.
It's a rich dynamic/realtime vehicle routing problem. You won't find an exact name for your problem, as when VRPs get too complex they don't fit inside any of the standard categories.
It's clearly a dynamic/realtime problem (the terms are used interchangeably) as you would typically only find out about roadside breakdowns at short notice.
Sometimes you're servicing a broken down car, which would be a single stop (so a vehicle routing problem). Sometimes you're towing a car, which would be a pick-up delivery problem. So you have a mix of both together.
You would want to get to the broken down vehicles ASAP and some would need fixing sooner than others (think a car broken down in a dangerous position on a motorway). You would therefore need soft time windows so you can penalise lateness instead of the standard hard time windows supported in most VRP formulations.
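To make the soft-time-window idea concrete, here is a toy sketch of how lateness could enter the objective (the weights and numbers are purely illustrative, not taken from any particular solver):

# Toy soft time-window penalty: a late arrival is allowed but costed, so
# urgent incidents are strongly preferred without making routes infeasible.
def route_cost(stops, distance_km, lateness_weight=50.0):
    # stops: list of (arrival_min, window_close_min, urgency) tuples
    cost = distance_km  # base objective: minimise travelled distance
    for arrival, window_close, urgency in stops:
        lateness = max(0.0, arrival - window_close)
        cost += lateness_weight * urgency * lateness
    return cost

# A 12 km route that reaches a high-urgency motorway breakdown 5 minutes late
# and an ordinary fuel delivery on time:
print(route_cost([(35, 30, 3.0), (50, 60, 1.0)], distance_km=12.0))  # 762.0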
Also for you to be able to scale to larger problems, you need an incremental optimiser that can restart from the previous (possibly now infeasible) solution when new jobs are added, vehicle positions are changed etc. This isn't supported out of the box in the open source solvers I know of.
We developed a commercial engine which does the above. We started off using the jsprit library, which supports mixing single stop and pickup delivery problems together. We later had to replace jsprit due to the amount of code we had to override to get it running happily for realtime problems, however jsprit may still prove a useful starting point for you. We discuss some of the early technical obstacles we had to overcome in getting jsprit to handle realtime problems in this white paper.
Disclaimer: I know that it is bad to not use an 'ORDER BY' in SQL when sorted data is required.
I am currently supporting a Pro*C program which is having a weird problem.
One of the possible causes of the weird problem may be that the original developers (from a long time ago) did not use ORDER BY in their SQL even though the program logic depends on it!
The program has been working fine all these years and started showing problems only recently.
We are trying to pin the weird problem on the missing ORDER BY (there are other candidate causes, like a recent port from Solaris to Linux).
What shadowy things on the database end should we look at that may have changed the old sort order? Things like data files, etc.?
Does anybody have experience with Pro*C on Solaris magically sorting the result set?
Thanks!
Since you know that the program cares about the order in which results are returned and you know that the query that is submitted is missing an ORDER BY clause, is there a reason that you don't just fix the problem rather than looking to try to figure out whether the actual order of results may have changed? If you fix the known ORDER BY problem and the "weird problem" you have disappears, that would provide some pretty good evidence that the "weird problem" is, in fact, caused by the missing ORDER BY.
Unfortunately, there are lots of things that might have caused the order of results to change, many of which may be impossible to track down. The most obvious cause would be a change in the execution plan. That, in turn, may have happened because statistics changed, because statistics didn't change enough, because of a patch, because of an initialization parameter change, or because of a client configuration change, among other things.
If you are licensed to use the AWR (Automatic Workload Repository), you might be able to find evidence that the plan has changed by looking for multiple PLAN_HASH_VALUE values for the SQL_ID in DBA_HIST_SQLSTAT over different days. If there are, you'd still have to figure out whether the different plans actually caused the results to be returned in a different order.
Beyond a query plan change, though, there are dozens of other possible causes. The physical order of data on disk may have changed because someone reorganized the table, because someone moved data files around on disk, or because the SAN automatically rebalanced something by moving data around. Data that was not cached in the past may now be cached (or vice versa). An Oracle patch may have been applied.
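For what it's worth, the AWR check described above might look something like this, sketched as a small Python script using the python-oracledb driver (the connection details and SQL_ID are placeholders you would need to replace, and querying DBA_HIST_* requires the Diagnostics Pack licence):

import oracledb

# Placeholder credentials/DSN and SQL_ID -- substitute your own.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orcl")
cur = conn.cursor()

# More than one PLAN_HASH_VALUE for the same SQL_ID across snapshots means
# the execution plan changed at some point -- a prime suspect for a changed
# (unordered) result order.
cur.execute("""
    SELECT s.plan_hash_value,
           MIN(sn.begin_interval_time) AS first_seen,
           MAX(sn.begin_interval_time) AS last_seen
      FROM dba_hist_sqlstat s
      JOIN dba_hist_snapshot sn
        ON sn.snap_id = s.snap_id
       AND sn.dbid = s.dbid
       AND sn.instance_number = s.instance_number
     WHERE s.sql_id = :sql_id
     GROUP BY s.plan_hash_value
     ORDER BY first_seen""", sql_id="0abc123xyz456")

for plan_hash, first_seen, last_seen in cur:
    print(plan_hash, first_seen, last_seen)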
I suggest that you replace your physical table with a view and enforce the required ordering in that view.
For example:
-- rename the physical table, then recreate the old name as an ordered view
ALTER TABLE TABLE_NOT_SORTED RENAME TO PHYS_TABLE_NOT_SORTED;

CREATE VIEW TABLE_NOT_SORTED AS
SELECT * FROM PHYS_TABLE_NOT_SORTED
ORDER BY DESIRED_COLUMNS;
In response to a comment:
According to this question and Ask Tom's answer, since Oracle does not guarantee any default ordering when you do not use ORDER BY, they are free to change it. They are absolutely right, of course: if you need sorting, use ORDER BY.
Other than that, we cannot say anything about your code or the default ordering.
I have a MonoTouch.Dialog with UIViewElements that contain a UIWebView. This is a messaging-type app and messages are constantly being added. I can easily get over 1000 UIViewElements and there is no limit. Obviously memory is going to be an issue.
What is the best way to only keep the last 100 rows?
I need a first in first out type setup. I can manually remove the Elements but do not know if there is a better way.
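Conceptually, what I am after is a bounded FIFO like the following (sketched in Python only to show the behaviour I mean; the app itself is C#/MonoTouch):

from collections import deque

messages = deque(maxlen=100)  # silently drops the oldest entry past 100 items

for i in range(1000):
    messages.append(f"message {i}")

print(len(messages))  # 100
print(messages[0])    # "message 900", the oldest retained message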
As far as I know, you do have to do this manually. I used TweetStation as an example.
Though this is not the latest app you can get (2010), it does cover a lot of the basics you need to understand.
Here is more information on building custom cells, in case you are not familiar with them already.
The core part of managing your list of elements is to override the CreateSizingSource method on your DialogViewController subclass and create your own SizingSource, as shown in this part of TweetStation.
I hope this will get you started.
I'm looking for a generic charting solution (ideally not a hosted one) that provides the following features:
Charting a tuple of values where the values are:
1) A service identifier (e.g. CPU usage)
2) A client identifier within that service (e.g. server IP)
3) A value
4) A timestamp with millisecond/second resolution.
Optional:
I'd also like to extend the concept of a client identifier further: taking the above example, I'd like to store statistics for each core separately, so another identifier would be Core 1/Core 2, and so on.
Now, to make sure I'm stating my problem clearly: I don't want a utility that collects these statistics. I'd like something that stores them, but this is not mandatory either; I can always store them in MySQL or something similar.
What I'm looking for is something that takes values such as these and charts them nicely, in a multitude of ways (timelines, motion, and the usual ones [pie, bar, ...]). Essentially, a nice visualization package that lets me make use of all this data. I'd be collecting data from multiple services and multiple applications, and the datapoints will be of varying resolution. Some of the data will include multiple layers of nesting, some none. (For example, CPU would go down to server IP and CPU#, whereas memory would only go down to server IP but would include a different identifier, i.e. free/used/cached, as the "secondary" identifier. Something like average request latency might not have a secondary identifier at all, as in the case of ping.) What I'm trying to get across is that having multiple layers of identifiers would be great. To add one final example of where multiple identifiers would help: adding an extra identifier on top of IP/CPU#, namely, process name. I think the advantages of that are obvious.
For some applications, we might collect data at a very narrow scope, focusing on every aspect, in other cases, it might be a more general statistic. When stuff goes wrong, both come in useful, the first to quickly say "something just went wrong", and the second to say "why?".
Further, it would be nice if the charting application threw out "bad" values; that is, if for some reason our monitoring program started reporting 300% CPU usage on a single core for 10 seconds, it'd be nice if the charts themselves didn't reflect it in the long run. Some sort of smoothing, maybe? This could obviously be done at the data layer instead, so it's not a requirement at all.
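For illustration only, this is the kind of data-layer cleanup I have in mind, with made-up numbers (sketched in Python, though the language is irrelevant here):

import pandas as pd

readings = pd.Series([42, 45, 44, 300, 310, 47, 46, 43])  # % CPU per second

capped = readings.clip(upper=100)  # drop impossible >100% single-core values
smoothed = capped.rolling(window=5, center=True, min_periods=1).median()

print(smoothed.tolist())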
Finally, comparing two points in time, or comparing two different client identifiers of the same service etc without too much effort would be great.
I'm not partial to any specific language, although I'd prefer something in (one of the following) PHP, Python, C/C++, C#, as these are languages I'm familiar with. It doesn't have to be open source, it doesn't have to be a library, I'm open to using whatever fits my purpose the best.
More of a P.S than a requirement: I'd like to have pretty charts that are easy for non-technical people to understand, and act upon too (and like looking at!).
I'm open to clarifying, and, in advance, thanks for your time!
I am pretty sure that protovis meets all your requirements. But it has a bit of a learning curve. You are meant to learn by examples, and there are plenty to work from. It makes some pretty nice graphs by default. Every value can be a function, so you can do things like get rid of your "Bad" values.