Highest high for a specific time span - ta-lib

How can I get the highest high for a specific time span, e.g. from 08:00 to 10:00 am, using a pyalgotrade or ta-lib indicator?
Of course I could write two if statements to check the time on the current bar, but I thought there must be a more elegant way to do it.

You could use technical.highlow.High: http://gbeced.github.io/pyalgotrade/docs/v0.17/html/technical.html#pyalgotrade.technical.highlow.High
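For the time-window part specifically, if your bars end up in a pandas DataFrame indexed by timestamp (an assumption; the question only mentions pyalgotrade and ta-lib), a minimal sketch:

```python
import pandas as pd

# Minimal sketch: bars indexed by timestamp, with 'high' holding the bar highs.
# pandas is an assumption here -- neither pyalgotrade nor ta-lib is required.
bars = pd.DataFrame(
    {"high": [101.5, 103.2, 102.8, 104.1, 103.0]},
    index=pd.to_datetime([
        "2024-01-02 07:30", "2024-01-02 08:15", "2024-01-02 09:00",
        "2024-01-02 09:45", "2024-01-02 10:30",
    ]),
)

# between_time() keeps only rows whose time of day falls in the window.
highest_high = bars.between_time("08:00", "10:00")["high"].max()
print(highest_high)  # 104.1
```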

Related

Traveling Salesman Alternate - How would one code it if the cities were all the same distance from each other?

First time asking a question, apologies if it's incorrect.
What would be the best way to approach this problem? It is similar to the travelling salesman problem, but I'm not sure if it runs into the same issues.
You have a list of "tasks" at certain locations (cities) and a group of "people" that can complete those tasks (salesmen). This is structured over a day, where some tasks may need to be completed before a specific time and may require specific "tools" (of which a set number are available). The difference is that the distance between each pair of locations is the same in all circumstances, but everyone has to return to the start. Therefore, rather than trying to minimise the distance travelled, you instead want to minimise the time each salesman spends moving and maximise the time spent at the initial starting node. This also gives you pre-defined requirements.
The program doesn't need to find an optimal solution, just an acceptable one (greater than a certain value). Would you just bash out each case? If so, what would be the best language to use for bashing out the solutions?
Thanks
EDIT - Just to confirm, the pre-requisite where all the cities are the same distance from each other is just for simplification of the problem, not reflective of real life.
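Since an acceptable rather than optimal solution is enough, one rough first cut is a greedy earliest-deadline-first pass. A Python sketch under assumed data shapes (all class and field names here are hypothetical):

```python
from dataclasses import dataclass, field

# Rough greedy sketch, not an optimiser: all travel legs cost the same fixed
# time, so feasibility reduces to deadlines and tool availability.
TRAVEL_TIME = 1.0  # identical distance between all locations (hours)

@dataclass
class Task:
    name: str
    duration: float           # hours on site
    deadline: float           # must finish by this hour of the day
    tool: str | None = None

@dataclass
class Person:
    name: str
    clock: float = 0.0                         # when this person is next free
    schedule: list = field(default_factory=list)

def assign(tasks, people, tools):
    # Earliest-deadline-first keeps the hard constraints in front.
    for task in sorted(tasks, key=lambda t: t.deadline):
        person = min(people, key=lambda p: p.clock)  # least-loaded person
        finish = person.clock + TRAVEL_TIME + task.duration
        if finish > task.deadline:
            return None  # this greedy pass failed; try another ordering
        if task.tool:
            if tools.get(task.tool, 0) <= 0:
                return None  # no tool left; a real solver would backtrack here
            tools[task.tool] -= 1
        person.clock = finish
        person.schedule.append(task.name)
    return people

tasks = [Task("fix pump", 2.0, 6.0, "wrench"), Task("inspect", 1.0, 9.0)]
result = assign(tasks, [Person("alice"), Person("bob")], {"wrench": 1})
if result:
    for p in result:
        print(p.name, p.schedule)
```

When a greedy pass fails, shuffling the task order and retrying a few thousand times is often enough to find an "acceptable" schedule at this scale; any language with decent data structures will do.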

How to simulate a pumping well in SEAWAT using flopy with a salinity constraint?

I want to simulate a pumping well above a seawater intrusion in SEAWAT using flopy, and I want pumping to cease automatically when the salinity concentration in the well's cell reaches a certain level, for example, 5% of relative salinity. In other words, I want the well to extract only freshwater and to stop pumping when the saltwater starts increasing in the well (due to up-coning). I would really appreciate it if someone could help me with this task.
I did this once. What you could do is write and run your model for every separate day (for example) and use the starting head and concentrations from the previous day as input for the current day. You can check whether the concentrations near your well exceed your threshold and then decide if your well is going to extract that day.
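A skeleton of that day-by-day loop with flopy might look like the sketch below; the SEAWAT model construction itself is omitted, and the output file names, cell indices, rates, and the 5%-of-seawater threshold are all assumptions:

```python
import flopy

# Sketch of the day-by-day loop described above. Building the SEAWAT model
# itself (dis, bas, btn, etc.) is omitted; lay/row/col mark the well cell.
lay, row, col = 0, 10, 10
pumping_rate = -500.0      # m3/day, negative = extraction (assumed)
threshold = 0.05 * 35.0    # 5% of seawater concentration in kg/m3 (assumed)

start_heads = None         # initial heads / concentrations for day 0
start_conc = None

for day in range(365):
    # Decide whether the well pumps today, based on yesterday's concentration.
    if start_conc is not None and start_conc[lay, row, col] > threshold:
        rate = 0.0         # salinity too high: switch the well off
    else:
        rate = pumping_rate

    # ... rebuild the one-day model here, passing start_heads / start_conc
    # as strt for BAS6/BTN and {0: [[lay, row, col, rate]]} to ModflowWel,
    # then call m.run_model() ...

    # Read the end-of-day state back as tomorrow's starting condition.
    ucn = flopy.utils.UcnFile("MT3D001.UCN")
    start_conc = ucn.get_data(totim=ucn.get_times()[-1])
    hds = flopy.utils.HeadFile("model.hds")
    start_heads = hds.get_data(totim=hds.get_times()[-1])
```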

Know what timezone is closest to a given time in NodeJS

What is the best way to know which UTC timezone has, let's say, 7:00 AM right now, or to find the one that is closest?
One way would be to have an array of all the timezones (there are only 39, not a big number), loop through them, and find the one with the smallest difference.
But I'm looking for a better solution. Is there one?
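For what it's worth, the ideal offset can be computed directly rather than by scanning every zone. Here is a sketch in Python (the arithmetic ports straight to NodeJS), using a simplified half-hour offset grid rather than the real list of 39 offsets:

```python
from datetime import datetime, timezone

# Which standard UTC offset shows (closest to) 07:00 local time right now?
# Real offsets run from UTC-12:00 to UTC+14:00, a few at 45-minute steps;
# a plain half-hour grid is used here as a simplification.
OFFSETS_MIN = [m * 30 for m in range(-24, 29)]

def closest_offset(target_hour=7, target_minute=0, now=None):
    now = now or datetime.now(timezone.utc)
    utc_minutes = now.hour * 60 + now.minute
    # Offset (in minutes) that would make local time exactly the target.
    wanted = (target_hour * 60 + target_minute - utc_minutes) % (24 * 60)

    def circular_diff(offset):
        # Distance between the two offsets on the 24-hour circle.
        d = (offset - wanted) % (24 * 60)
        return min(d, 24 * 60 - d)

    return min(OFFSETS_MIN, key=circular_diff)

best = closest_offset()
sign, (h, m) = ("+" if best >= 0 else "-"), divmod(abs(best), 60)
print(f"UTC{sign}{h:02d}:{m:02d}")
```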

How long will it take to audit 29k lines of Drupal code?

A client is asking how long it would take to audit the security of his Drupal module, which is 29k lines long. Does anyone know at least what ballpark I should give him? His main concerns are file encryption and user permissions.
Nope, not a damn clue :-)
However, whatever value you choose, may I suggest one thing?
Monitor your progress! Tell your client that your initial estimate is (for example) twenty-nine working days but that it depends on a great many factors outside your control.
Tell them you plan to mitigate risks of budget overrun by providing a daily snapshot of progress:
current number of lines audited in total [a].
days spent [b].
current "run rate" (number of lines per day, average) [c = a/b].
number of lines yet to be audited [d = 29,000 - a].
estimated days to completion [e = d / c].
Allow them to pull the plug at any time if the run rate is well below what you estimated.
This basic project management/reporting should give them the confidence that you know what you're doing, and will minimise their exposure considerably, to the point where they'll feel a lot more comfortable about taking you on.
Just on that last bullet point above, you may want to consider giving them a range (say +/-5% of the estimate), but don't get too clever about working out best and worst case based on your best and worst days to date. The power of averaging is that it gives you a "best" guess without having to fiddle too much with figures.
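Those five numbers are trivial to compute; a throwaway sketch of the daily snapshot (the sample figures are made up):

```python
# Daily snapshot for the client, per the bullets above (sample figures made up).
TOTAL_LINES = 29_000

def snapshot(lines_audited, days_spent):
    run_rate = lines_audited / days_spent            # c = a / b
    remaining = TOTAL_LINES - lines_audited          # d
    eta_days = remaining / run_rate                  # e = d / c
    print(f"audited {lines_audited}, rate {run_rate:.0f}/day, "
          f"{remaining} left, ~{eta_days:.1f} days to go")

snapshot(lines_audited=4_200, days_spent=4)  # rate 1050/day, ~23.6 days to go
```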
Typical estimates I've seen are that you can expect a developer to review 100-150 lines of code per hour. This is a very rough estimate, and it will vary greatly depending upon the nature of the code and the thoroughness of the review. Also, if you can review code for 8 hours a day, 5 days a week, straight, you're inhuman and amazing; the rest of us need a change of activity to clear the brain.
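For scale, taking the midpoint of that range: 29,000 lines ÷ 125 lines/hour ≈ 232 hours, or roughly 29 working days at 8 hours a day, which happens to line up with the example estimate in the previous answer.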

Find UK postcodes closest to other UK postcodes by matching the postcode string

Here is a question that has kept me awake for a number of days now. The only conclusion I have come to so far is that Red Bull does not usually help coders.
I have a scenario in my application where I have a number of jobs (1 to 50). Each job has an address, and I have the following properties of an address: Postcode, Latitude, and Longitude.
I also have a table of workers, and they too have addresses. When jobs or workers are created through screens, I use Google Maps queries to make sure the provided postcode is valid and is in the UK, so all the addresses are verified.
I am using a scheduler control to display workers on the y-axis and a timeline on the x-axis. Every job has a date and can only move vertically on the scheduler, on the job's date. The user selects a number of jobs and they are displayed in a basket close to the scheduler. The user can then drag and drop jobs onto workers. All this is manual, so it works.
My task is to automate this so that the user does not do much except verify and allot the jobs.
Every worker has a property called WillingMaximumDistanceTravel, which is an integer representing the miles the worker is willing to travel for a job.
Now here is the headache: I have over 1500 workers. I have a utility function that uses Newtonsoft's JsonConvert to deserialize a response stream from Google Maps. I need to feed it Postcodes A and B.
I also plan to introduce a new table to the DB to store the distances found, as Postcode A, Postcode B, and Distance. Therefore, if I find myself comparing the same postcodes again, I will just retrieve the result from the DB instead, and eventually I would no longer need to bother Google at all, as this table would become very comprehensive.
I cannot use the simple Haversine formula, as a crow-flies path is not what I need here. The pain in this is that it takes a lot of time to calculate: some workers can travel over 10 miles, while others vary from 15 to 80, and I have to take the first job from the list and run it against every applicable worker on the system! I was wondering whether UK postcodes have a pattern to them. If we sort a list of UK postcodes, can we roughly estimate, from the alphanumeric pattern, where we will hit a 100-mile mark, a 200-mile mark, and so on?
If anyone is interested in the code, please drop a line and I will paste it.
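For what it's worth, a minimal sketch of the caching table described in the question, using sqlite3; get_drive_distance() is a hypothetical stand-in for the Google Maps call:

```python
import sqlite3

# Sketch of the postcode-pair distance cache described above.
db = sqlite3.connect("distances.db")
db.execute("""CREATE TABLE IF NOT EXISTS distance (
    postcode_a TEXT, postcode_b TEXT, miles REAL,
    PRIMARY KEY (postcode_a, postcode_b))""")

def get_drive_distance(a, b):
    raise NotImplementedError("call the Google Maps Directions API here")

def distance(a, b):
    # Store each pair once (this treats driving distance as symmetric,
    # which is a simplification).
    a, b = sorted((a, b))
    row = db.execute(
        "SELECT miles FROM distance WHERE postcode_a=? AND postcode_b=?",
        (a, b)).fetchone()
    if row:
        return row[0]                 # cache hit: no Google call needed
    miles = get_drive_distance(a, b)  # cache miss: ask Google once
    db.execute("INSERT INTO distance VALUES (?, ?, ?)", (a, b, miles))
    db.commit()
    return miles
```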
(I work for Google, but I'm not speaking on behalf of Google. I have nothing to do with the maps API.)
I suspect this isn't a great situation for using the Google Maps API, simply because you're pushing so much data through. You really don't want to make that many requests, even if you could do so under the directions limits.
When I tackled something similar in a previous job, we bought into a locally-hosted maps API - but even that wasn't fast enough for this sort of work. We ended up precomputing the time to travel from the centroid of each postcode "area" (probably the wrong name for it, but the first part of the postcode followed by the first digit of the remainder, e.g. "SW1W 9" for "SW1W 9TQ") to every other area, storing the result in a giant table. I think we only did it for postcodes which were within 100 miles or something similar, to cut down on the amount of preprocessing.
Even then, a simple DB wasn't quite as fast as we wanted - so we stored the results in a giant file, with a single byte per source/destination pair. (We had a fixed sequence of source postcodes and target postcodes, so we didn't need to specify those.) At that point, computing a travel time consisted of:
Work out postcode areas (substring work)
Find the index of each postcode area within the sequence
Check if we'd loaded that part of the file (we lazy loaded for startup speed)
Load the row if necessary, and just access it otherwise
The bytes were on a sliding scale of accuracy, so for the first 60 minutes it was on a per-minute basis, then each extra value meant an extra 2 minutes, then 5 etc. (Those aren't the exact values, but it was something like that.)
When you've worked out "good candidates" you can ask an on-site API or the Google Maps API for more accurate directions for your exact postcodes, of course.
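The sliding-scale byte is easy to mirror; a sketch with made-up breakpoints (the answer above notes the real values differed):

```python
# Sketch of the sliding-scale travel-time byte described above (breakpoints
# made up: 0-59 -> that many minutes, then 2-minute steps, then 5-minute steps).
def decode_minutes(b):
    if b < 60:
        return b                      # first hour: per-minute resolution
    if b < 120:
        return 60 + (b - 60) * 2      # next band: 2-minute steps
    return 180 + (b - 120) * 5        # beyond that: 5-minute steps

def encode_minutes(m):
    if m < 60:
        return m
    if m < 180:
        return 60 + (m - 60) // 2
    return min(255, 120 + (m - 180) // 5)  # one byte caps the scale

assert decode_minutes(encode_minutes(45)) == 45
print(decode_minutes(200))  # 580 minutes
```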
You want to look for a spatial index or a space-filling curve. A spatial index reduces the 2D problem to a 1D problem by recursively subdividing the surface into smaller tiles; it is basically a reordering of the tiles. You can subdivide the surface either with a numeric index or with a string using 4 characters. The latter can be useful to you because it lets you query the string with all the string operations hidden in the database engine. You want to look for Nick's spatial index quadtree hilbert-curve blog.
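To make the string idea concrete: each quadtree level appends one of four characters, so nearby points share prefixes and a prefix query (e.g. LIKE 'cdbb%') selects a tile. A small sketch (the coordinates are illustrative):

```python
# Quadtree string key sketch: one character ('a'-'d') per subdivision level,
# so nearby points share prefixes and "WHERE key LIKE 'cdbb%'" finds a tile.
def quad_key(lat, lon, depth=8):
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    key = []
    for _ in range(depth):
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        quadrant = (lat >= mid_lat) * 2 + (lon >= mid_lon)
        key.append("abcd"[quadrant])
        lat0, lat1 = (mid_lat, lat1) if lat >= mid_lat else (lat0, mid_lat)
        lon0, lon1 = (mid_lon, lon1) if lon >= mid_lon else (lon0, mid_lon)
    return "".join(key)

# Two nearby London points share a long prefix; Edinburgh diverges earlier.
print(quad_key(51.4952, -0.1441))  # around SW1W
print(quad_key(51.5033, -0.1196))  # nearby Westminster
print(quad_key(55.9533, -3.1883))  # Edinburgh
```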
