So, I have a service which, as one of its features, allows my clients to optimize their driving routes for the day. Normally they only have a dozen or so stops before returning to base, so I just use the MapQuest API (paid), which has a route-optimization function. However, I just got a new client who has 40+ stops per day, and the MapQuest API only allows 25 stops (start, 23 waypoints, end) with route optimization. So, does anyone have any ideas how I can best attack the problem of trying to optimize a route for 40+ stops?
So, yes, I know that the traveling salesman problem is computationally difficult. The MapQuest API is super fast with the limited number of stops that it allows, and I have a paid subscription, so I can make multiple calls in a row without getting in trouble with them. One idea I've played around with is simply dividing the route in half, optimizing each half, and combining the results, but it seems to be lacking in efficacy. So, if anyone has tackled this, I'd love to hear your solution.
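In case it helps, here is a rough sketch of one way to use the 25-stop optimizer as a building block: order all the stops with a cheap nearest-neighbour pass, cut that ordering into chunks that fit under the API limit, let the API optimize each chunk between fixed anchor points, and stitch the pieces back together. This is only a sketch in Python under my own assumptions; optimize_chunk() is a placeholder for whatever call you make to the paid optimizer, and straight-line distances stand in for real road distances.

    import math

    def dist(a, b):
        # crude straight-line distance between (lat, lon) pairs;
        # real road distances from the API would be better
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbour_order(start, stops):
        # cheap greedy ordering so that each chunk covers nearby stops
        order, remaining, current = [], list(stops), start
        while remaining:
            nxt = min(remaining, key=lambda s: dist(current, s))
            remaining.remove(nxt)
            order.append(nxt)
            current = nxt
        return order

    def optimize_chunk(start, stops, end):
        # placeholder for the 25-stop optimizer (start + up to 23 waypoints + end);
        # here it just returns the stops unchanged
        return stops

    def plan_route(base, stops, chunk_size=23):
        ordered = nearest_neighbour_order(base, stops)
        route, anchor = [], base
        for i in range(0, len(ordered), chunk_size):
            chunk = ordered[i:i + chunk_size]
            # the first stop of the next chunk (or the base, at the end) is the fixed end point
            end = ordered[i + chunk_size] if i + chunk_size < len(ordered) else base
            route.extend(optimize_chunk(anchor, chunk, end))
            anchor = route[-1]
        return [base] + route + [base]

The chunking is what kills solution quality when you just split in half, so keeping each chunk geographically tight (hence the greedy ordering first) matters more than where exactly you cut.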
I know this team has worked on it and they can probably help you out.
I am researching a problem that is pretty unique.
Imagine a roadside assistance company that wants to dynamically route its vehicles. For each batch of new incidents, it wants to create routes that will satisfy them, subject to some constraints (time constraints, road accessibility, vehicle-incident matching).
The company has a heterogeneous fleet of vehicles (motorbikes for the easy cases, up to tow trucks for the hard cases), and each incident states its particular needs (we know whether it just needs fuel or needs towing).
There is no depot, only the vehicles roaming on the streets.
The objective is to create routes dynamically, on the fly, with the aim of minimizing time and total traveled distance.
Have you ever met such a problem? Do you have any idea in which VRP variant it belongs?
I have seen two previous questions but unfortunately they don't fit with my problem.
Namely, optaplanner - VRP but with no depot and Does optaplanner out of box support VRP with multiple trips and no depot, which are both open VRPs.
Unfortunately I don't have code right now, as I am still modelling the way I will approach this problem.
I am really sorry for creating a suggestion question and not a real one.
Thank you so much in advance.
It's a rich dynamic/realtime vehicle routing problem. You won't find an exact name for your problem, as when VRPs get too complex they don't fit inside any of the standard categories.
It's clearly a dynamic/realtime problem (the terms are used interchangeably) as you would typically only find out about roadside breakdowns at short notice.
Sometimes you're servicing a broken down car, which would be a single stop (so a vehicle routing problem). Sometimes you're towing a car, which would be a pick-up delivery problem. So you have a mix of both together.
You would want to get to the broken down vehicles ASAP and some would need fixing sooner than others (think a car broken down in a dangerous position on a motorway). You would therefore need soft time windows so you can penalise lateness instead of the standard hard time windows supported in most VRP formulations.
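To make the soft-time-window idea concrete, the usual trick is to keep late arrivals feasible but charge them in the objective, roughly like this (a generic sketch, not any particular solver's API):

    def lateness_cost(arrival_minute, due_minute, weight_per_minute=10.0):
        # soft time window: arriving after the due time is allowed, but every
        # late minute adds cost to the objective instead of making the route infeasible
        return weight_per_minute * max(0.0, arrival_minute - due_minute)

Urgent incidents (the car stranded on a motorway) simply get a much larger weight.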
Also for you to be able to scale to larger problems, you need an incremental optimiser that can restart from the previous (possibly now infeasible) solution when new jobs are added, vehicle positions are changed etc. This isn't supported out of the box in the open source solvers I know of.
We developed a commercial engine which does the above. We started off using the jsprit library, which supports mixing single stop and pickup delivery problems together. We later had to replace jsprit due to the amount of code we had to override to get it running happily for realtime problems, however jsprit may still prove a useful starting point for you. We discuss some of the early technical obstacles we had to overcome in getting jsprit to handle realtime problems in this white paper.
I plan on creating a ticket "pass" platform. Basically, imagine you come to a specific city and you buy a "pass" for several days (for which you get things like free entrance to museums and other attractions).
Now, the main question that has bothered me for several days is: how will museum staff VALIDATE whether a pass is valid? I see platforms like EventBrite etc. using barcodes/QR codes, but that is not quite a viable solution, because we'd need to get a good camera phone for every museum to scan the code, and that's over budget. So I was thinking of something like a simple 6-letter code, e.g. GHY-AGF. There are 26^6 = 308 million combinations, which is a tough nut to crack.
I've asked a question on the StackExchange security site about this, and the main concern was brute forcing. However, I can only imagine someone pulling off this kind of attack if they had access to the pass-lookup function. The only people that will be able to do lookups are:
1) The museum staff (for which there will be a secure user/pass app, and rate limits of no more than 1000 look-ups per day)
2) Actual customers checking the validity of their pass, and this will be protected with Google reCAPTCHA v3, which doesn't sacrifice user experience the way v1 did. Rate limits and IP bans will also be applied.
Is brute force STILL a viable attack if I put these 2 measures in place? Also, is there something else I'm missing in terms of security when using this approach?
By the way, using a max. 6-character string as a unique "pass" has many advantages portability-wise. For example, you could print "blank" passes, where the user is given instructions on how to obtain a code. After they pay, they'll be given a code like GAS-GFS, which they can easily write on the pass with a pen. This is not possible with a QR code/barcode. Also, the staff can check validity in less than 10 seconds by typing the code into a web app, or by sending an SMS to check if it's valid. If you're aware of any other portable system like this that may be more secure, let me know.
Brute forcing is a function of sparseness. How many codes at any given time are valid out of how large a space? For example, if out of your 308M possibilities, 10M are valid (for a given museum), then I only need ~30 guesses to hit a collision. If only 1000 are valid, then I need more like 300k guesses. If those values are valid indefinitely, I should expect to hit one in less than a year at 1000/day. It depends on how much they're worth to figure out if that's something anyone would do.
This whole question is about orders of magnitude. You want as large a code space as you can get away with. 7 characters would be better than 6 (exactly 26x better). 8 would be better than that. It depends on how devoted your attackers are and how big the window is.
But that's how you think about the problem to choose your space.
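A back-of-the-envelope version of that calculation (sketch only):

    def expected_guesses(space, valid_codes):
        # expected number of random guesses before hitting some valid code
        return space / valid_codes

    space = 26 ** 6                              # 308,915,776 possible codes
    print(expected_guesses(space, 10_000_000))   # ~31 guesses
    print(expected_guesses(space, 1_000))        # ~309k guesses, ~309 days at 1000/day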
What's much more important is making sure that codes can't be reused, and are limited to a single venue. In all problems like this, reconciliation (i.e. keeping track of what's been issued and what's been used) is more important than brute-force protection. Posting a number online and having everyone use it is dramatically simpler than making millions of guesses.
I am trying to get multiple time-frame data for different trading instruments ( _Symbol ) from a MetaTrader4 Terminal to a node.
How can I do it?
Can we do it from the same EA inside a MetaTrader4 Terminal?
A.1: Yes, we can.
A.2: No, that initial idea is not a good one.
While the intention is clear, the idea of using a single EA to send live data for multiple trading instruments does not serve that interest well.
The MQL4 code-execution environment has some fixed, hard-wired internal logic, and because of this, plus the reality of how Capital Markets and Broker-type market-access mediators work, a solo EA will never fit these requirements.
A simple call to
iOpen( aTradingInstrumentSymbolNAME,  // which symbol to query ( the same holds for iHigh, iLow, iClose, iVolume, iTime )
       aSelectedTimeFrameDefinedCODE, // which timeframe, one of the ENUM_TIMEFRAMES codes
       aRelativeBarPTR                // which bar, as a shift back from the current bar ( 0 )
       )
is by far not enough.
A professional solution will require a lot of care for real-time handling capabilities, for unmasking the actual flow of mutually hiding events, and for achieving minimal processing latencies, so quite a high level of engineering expertise will be needed.
Start by learning the basics about Scripts, benchmark all your critical code sections by recording their actual durations in [us], and make sure your code remains non-blocking under all circumstances. This will decide whether more than one code-execution thread will be necessary in prime time / at peak hours.
Having managed that, you have only just started down the road towards your expected result.
Next, one has to decide on a feasible inter-process / distributed-computing data flow and signalling scheme, needed for the inter-platform integration.
Last, but not least, an important point is the legal side of such an undertaking. It depends both on your local jurisdiction and on the Broker's Terms & Conditions, as no one would enjoy celebrating a technically well-mastered project from inside a jail.
All in all, quite an interesting project.
iOpen( Symbol(), PERIOD_M1, 1 ) is the way to get data from M1 ( the last completed bar ); if you need another timeframe, replace PERIOD_M1 with another ENUM_TIMEFRAMES value. So what is the problem? Usually StackOverflow requires an MCVE-based example in order to help you.
Okay, first of all I want to tell you that I am new to all the techniques mentioned in the title.
I want to make a new app. Think of it as a real-time trading engine (like for stocks, for example).
So, there are two things that really matter:
Speed / Performance: Everyone has to see trades in realtime
Security: the same trade can be attempted simultaneously, but only one attempt can be successful
I thought about an approach like this:
If a user wants to buy 10 pieces of stock X for $100 each, he places an order, which I store in Redis (speed) and push to all clients with socket.io. Then, as soon as another user wants to sell 15 pieces for $100, the script should check whether there is an open buy order. If so, it saves the match as a successful transaction in MongoDB (persistence) and closes the buy order for 10 pieces.
In this example, 5 pieces are left. The script would display that with a calculation like this: 15 (sell at $100) minus 10 (buy at $100) equals 5 left. Every time someone wants to trade something, this calculation would be made, because otherwise I don't know how many stocks are left for trading.
Edit: Or I could subtract the 10 pieces from the 15 pieces in Redis so that I don't need to calculate this every time. But if something went wrong, I wouldn't know what the original data was. That's a problem.
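A minimal sketch of the matching/remainder bookkeeping described above, in plain Python just to show the logic (the names and structures are made up, not actual Redis/Mongo code):

    # open buy orders for one instrument: price -> remaining quantity
    open_buys = {100: 10}

    def record_trade(price, quantity):
        # stand-in for persisting the transaction to MongoDB
        print(f"trade: {quantity} @ ${price}")

    def sell(price, quantity):
        # match an incoming sell against the open buys at the same price
        available = open_buys.get(price, 0)
        matched = min(available, quantity)
        if matched:
            open_buys[price] = available - matched
            if open_buys[price] == 0:
                del open_buys[price]
            record_trade(price, matched)
        return quantity - matched              # the unmatched remainder

    print(sell(100, 15))                       # trade: 10 @ $100, then prints 5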
Now the questions are:
Would you make it like this? Better ideas maybe?
What would happen if two users placed the exact same order at the same time? Could it happen that it gets stored twice in MongoDB as two different successful transactions? Of course you could run an audit over Redis and MongoDB and compare them, but that would be a horrible solution.
Hope you understand what I'm trying to ask. Thanks in advance!
First of all, if you do not know anything about the stack you are using, it is not a good idea to say you need high performance (high availability, good security and so on). Being absolutely new to all the tools you are using, you should be happy if it just works.
As for your question: first of all, take a look at how other people have done similar things. Here is an open source bitcoin trading engine which uses node.js, which makes it an excellent example to study (it is complex, so take a deep breath). If you want to use mongo, you need to know that it does not support transactions, so you need to look at how to implement them yourself. These two examples are really good at explaining it.
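Not from those examples, but one common way to avoid the double-match race is to lean on the atomicity of a single-document update: claim/close the open order with an atomic find-and-modify, so that of two simultaneous matchers only one can succeed. A sketch with pymongo (database, collection and field names are made up):

    from pymongo import MongoClient, ReturnDocument

    orders = MongoClient()["exchange"]["open_orders"]

    def try_claim(order_id):
        # atomically flip the order from 'open' to 'matched'; only one caller wins
        return orders.find_one_and_update(
            {"_id": order_id, "status": "open"},
            {"$set": {"status": "matched"}},
            return_document=ReturnDocument.AFTER,
        )

    # the loser gets None back and has to look for another open order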
So I am verifying new operational management systems, and one of these systems sends pick lists to a scalable number of handheld devices. It sends these using messages, and the pick lists may contain overlapping jobs. So in my virtual world, I need to make sure that two simulated humans don't pick the same job: whenever someone picks a job, all the job lists get refreshed so that the picked job doesn't appear on anyone else's handheld anymore, but for me the message is still in the queue being handled, so I have to make sure to discard that option.
Basically I have this giant list with a mutex, and the more "people" hitting it, and the faster they hit it, the slower I can handle messages, to the point where I'm no longer at real time. That is bad, because I can't actually validate the system if I can't keep up with the messages. (Two guys on the same aisle will recognize that one is going to pick one object and the next guy should pick the 2nd item, but I need to check every single job I'm about to pick and see if it has already been claimed by someone else.)
I've considered localized binning of the lists, but it doesn't actually solve the problem in the stupid case that breaks it anyway: tons of people working on the same row. Now granted, this would probably be confusing for real people as well, as in real life they need to do the same resolution, but I'm curious what the currently accepted "best" solution to this problem is.
PS - I am already implementing this in C++ and it's fast, fast enough that in any practical test I don't "need" this question answered; I'm asking more out of curiosity.
Thanks in advance!
I see a problem in the design "giant list with (one) mutex". You simply can't provide the whole list in a synchronized fashion if the list size and/or access rate is unbounded; basic math works against you. So what I would do is put a mutexed flag on each job. You can't prevent a job from being displayed on someone's screen, but you can ensure that he gets a graceful "no longer available" error and THEN the updated list. If you have ever tried to reserve a seat at a highly popular gig, you may have witnessed this solution.
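A minimal sketch of that flag-per-job idea, in Python threading just to show the shape of it (your C++ version would use the equivalent per-job mutex or atomic):

    import threading

    class Job:
        def __init__(self, job_id):
            self.job_id = job_id
            self.claimed = False
            self.claimed_by = None
            self.lock = threading.Lock()   # one tiny lock per job, not one giant list lock

        def try_claim(self, worker_id):
            # atomic test-and-set: exactly one worker gets True, everyone else
            # gets the graceful "no longer available" answer and refreshes their list
            with self.lock:
                if self.claimed:
                    return False
                self.claimed = True
                self.claimed_by = worker_id
                return True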