Data Parallelism in Ant Colony Optimization - multithreading

I have been trying to understand how ACO (ant colony optimization) can be implemented with data parallelism. I have read some material I found through Google, but I only need the basic idea explained in a simple way; most of the papers talk about everything else instead of the main thing in simple words.
What I understand so far is that we make it run in parallel by using multi-tasking (threading), but I am not sure what each thread would do, or how we could separate the work into threads without causing trouble.
Does it mean we should create a separate thread for each ant? That would create a lot of threads! If there are 200 ants, do we really need 200 threads?
I am still confused about data parallelism in ACO, and I would really love to hear, in simple words, how we would implement it in parallel.

A few simple ideas to run ACO in parallel
Since you have already read up on ACO, here are a few simple ideas on ways to run ACO in parallel. Rather than getting caught up in multiple threads and multi-tasking, it might be helpful to think in terms of the 'parallel compute resources' at your disposal.
ACO is one case of Agent-Based Simulation (ABS), and ABS lends itself particularly well to parallelization.
Simple Options
Option 1. Run a full version of ACO in each of the parallel resources.
Code your ACO algorithm once, then run independent copies of it in parallel. (Since there is a stochastic element to the algorithm, each run will explore differently, and you can then keep the 'best' solution found across your runs.)
Option 2. Explore the effects of varying ACO parameters
Like any simulation approach, any ACO implementation has a large number of runtime parameters: number of vertices, time to run, number of ants, pheromone evaporation rate, the probability function used to choose among path options, and many more. When you multiply these options together, they add up to a large number of cases to be run. Divide that work among your parallel compute resources.
The two options mentioned above are sometimes referred to as 'embarrassingly' parallel: very easy to implement (think of it as a Design of Experiments), you get back a whole matrix of results, and you can draw conclusions by studying what effect the changes in the parameters had on the solution. A sketch of this style follows.
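Here is a minimal Java sketch of Options 1 and 2 together, assuming a hypothetical runAco(evaporation, ants, seed) function that stands in for your real ACO code (all names and parameter values here are made up). Every (parameter combination, seed) pair is an independent task, so a plain fixed thread pool can chew through the whole experiment matrix:

```java
import java.util.*;
import java.util.concurrent.*;

public class AcoSweep {
    public static void main(String[] args) throws Exception {
        double[] evaporations = {0.1, 0.3, 0.5};
        int[] antCounts = {20, 50, 100};
        int seedsPerConfig = 5;                 // repeated runs per configuration (Option 1)

        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        Map<String, Future<Double>> results = new LinkedHashMap<>();

        // The parameter grid (Option 2): every combination is its own task.
        for (double evap : evaporations)
            for (int ants : antCounts)
                for (int seed = 0; seed < seedsPerConfig; seed++) {
                    final double e = evap; final int a = ants, s = seed;
                    results.put("evap=" + e + " ants=" + a + " seed=" + s,
                                pool.submit(() -> runAco(e, a, s)));
                }

        // Gather the whole matrix of results.
        for (var entry : results.entrySet())
            System.out.println(entry.getKey() + " -> best=" + entry.getValue().get());
        pool.shutdown();
    }

    // Placeholder: plug in your real ACO here; returns the best tour length found.
    static double runAco(double evaporation, int ants, long seed) {
        return new Random(seed).nextDouble() * 100;    // dummy value for the sketch
    }
}
```

Because no task depends on another, this scales to as many compute resources as you have; on a cluster, the same pattern works with one process per node instead of one thread per core.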
Option with solution sharing
Option 3: Master-Slave approach, with Partial Solution sharing
Going up one more level in complexity, we can have each node contribute its 'knowledge/findings' to the overall problem solution. This is sometimes called a master-slave approach. The master is trying to solve the overall problem (it could be TSP, or some similar complex problem), and each 'slave' is solving some aspect of it with some fairly simple algorithm. The idea is that, when combined, they produce powerful results.
After a certain number of iterations, the solutions are passed back and forth, with 'bad' solutions thrown out. Some variant of the Map-Shuffle-Reduce paradigm would do that. The master evaluates the current best solution, and that is transferred back to each 'slave' node (example: the latest overall pheromone levels are given to all the slave nodes). The next round of solving then resumes.
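To make that round-trip concrete, here is a toy Java sketch for a random TSP instance. Everything here (class names, parameter values, the "deposit along the round's best tour" policy) is made up for illustration; real ACO variants differ in exactly how pheromone is combined and redistributed. Worker tasks each build one tour against a read-only snapshot of the pheromone matrix; the master then evaporates pheromone, reinforces the round's best tour, and hands the updated matrix to the next round:

```java
import java.util.*;
import java.util.concurrent.*;

public class MasterWorkerAco {
    record Tour(int[] path, double length) {}

    static final int CITIES = 40, ANTS = 8, ROUNDS = 50;
    static final double EVAPORATION = 0.5, ALPHA = 1.0, BETA = 2.0;

    public static void main(String[] args) throws Exception {
        Random rng = new Random(1);
        double[][] dist = new double[CITIES][CITIES];
        for (int i = 0; i < CITIES; i++)
            for (int j = i + 1; j < CITIES; j++)
                dist[i][j] = dist[j][i] = 1 + rng.nextInt(100);

        double[][] pheromone = new double[CITIES][CITIES];
        for (double[] row : pheromone) Arrays.fill(row, 1.0);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        Tour globalBest = null;

        for (int round = 0; round < ROUNDS; round++) {
            // Workers only read the snapshot, so no locking is needed
            // while tours are being built.
            final double[][] snap = Arrays.stream(pheromone)
                                          .map(double[]::clone)
                                          .toArray(double[][]::new);
            List<Future<Tour>> ants = new ArrayList<>();
            for (int a = 0; a < ANTS; a++)
                ants.add(pool.submit(() -> buildTour(snap, dist)));

            Tour roundBest = null;
            for (Future<Tour> f : ants) {
                Tour t = f.get();
                if (roundBest == null || t.length() < roundBest.length()) roundBest = t;
            }
            if (globalBest == null || roundBest.length() < globalBest.length())
                globalBest = roundBest;

            // Master step: evaporate everywhere, reinforce the round's best tour.
            for (double[] row : pheromone)
                for (int j = 0; j < row.length; j++) row[j] *= (1 - EVAPORATION);
            int[] p = roundBest.path();
            for (int i = 0; i < p.length; i++) {
                int from = p[i], to = p[(i + 1) % p.length];
                pheromone[from][to] += 1.0 / roundBest.length();
                pheromone[to][from] += 1.0 / roundBest.length();
            }
        }
        pool.shutdown();
        System.out.println("best tour length: " + globalBest.length());
    }

    // One ant: probabilistic construction weighted by pheromone^ALPHA * (1/distance)^BETA.
    static Tour buildTour(double[][] pher, double[][] dist) {
        int n = dist.length;
        Random rng = ThreadLocalRandom.current();
        boolean[] visited = new boolean[n];
        int[] path = new int[n];
        path[0] = rng.nextInt(n);
        visited[path[0]] = true;
        double length = 0;
        for (int step = 1; step < n; step++) {
            int cur = path[step - 1];
            double[] w = new double[n];
            double sum = 0;
            for (int j = 0; j < n; j++)
                if (!visited[j]) {
                    w[j] = Math.pow(pher[cur][j], ALPHA) * Math.pow(1.0 / dist[cur][j], BETA);
                    sum += w[j];
                }
            double r = rng.nextDouble() * sum;     // roulette-wheel selection
            int next = -1;
            for (int j = 0; j < n && next < 0; j++)
                if (!visited[j] && (r -= w[j]) <= 0) next = j;
            if (next < 0)                          // numerical fallback
                for (int j = 0; j < n; j++) if (!visited[j]) { next = j; break; }
            visited[next] = true;
            path[step] = next;
            length += dist[cur][next];
        }
        length += dist[path[n - 1]][path[0]];
        return new Tour(path, length);
    }
}
```

The snapshot plays the role of "the latest overall pheromone levels given to all the slave nodes" from the text; on a cluster, handing out the snapshot and collecting tours would become messages instead of shared arrays.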
Option 3 has tons of nuanced variations, and some people spend their entire lives improving various aspects of it.
Hope some of these ideas help.

Related

How do I write a Multi threaded Alpha-Beta Search algorithm?

I'm trying to create a chess engine using an alpha-beta minimax search algorithm, but the code is too slow. I've done all the optimisations I can think of, but it is still very slow in a single thread. I looked at the source code of some other engines and at the chess programming wiki (https://www.chessprogramming.org/Parallel_Search#Parallel_Alpha-Beta) to see how they do it, but the code is beyond my level and I don't understand it. I couldn't find any accessible written sources or code snippets either.
Can someone explain how to efficiently implement threading in an alpha-beta search algorithm? Thanks.
Alpha-beta is an inherently sequential algorithm, as your alpha and beta values get updated continuously throughout the search and cutoffs are decided based on these values. For this reason, getting any speedup by increasing the number of threads is very hard, and the more threads you throw at it, the smaller the marginal gains will be.
However, there are still several ways to do it, most of them fairly complicated, and they scale poorly with more threads. The go-to algorithm used to be the Young Brothers Wait Concept, a fairly complicated algorithm that was used, for example, by Stockfish until a few years back. But with the increasing number of cores available on modern computers, its scaling was poor and the code complex. Today most modern engines use something called Lazy SMP, which is almost as simple as an algorithm can be and scales better than the others.
In Lazy SMP, all you have to do is start the exact same search you would normally do, just on multiple threads. It relies on a working transposition table through which the threads communicate with each other. The threads will never be exactly in sync, and that randomness leads each thread to explore slightly different parts of the search tree and save its results into the transposition table, where they might be used by another thread. Of course a lot of work is repeated across threads, but this is still better than trying to be clever about splitting the work and slowing the algorithm down, and that is especially true as you scale up the number of threads.
I recommend you take a look at the chess programming wiki, where you can even find some pseudo code on how to implement it.
https://www.chessprogramming.org/Lazy_SMP
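To show just how little structure Lazy SMP needs, here is a toy Java sketch over a synthetic game tree (positions are just longs, and move generation and evaluation are hash-based stand-ins, so the whole thing runs as-is rather than being a real engine). The only shared state is the transposition table; note that a real engine would also store whether each score is exact or just a bound, which this sketch omits for brevity:

```java
import java.util.concurrent.*;

public class LazySmpSketch {
    // One table entry: how deep we searched from this position, and the score.
    record Entry(int depth, int score) {}
    static final ConcurrentHashMap<Long, Entry> tt = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        int threads = 4, depth = 12;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
        // Lazy SMP: every thread runs the identical search from the root.
        for (int t = 0; t < threads; t++)
            cs.submit(() -> alphaBeta(1L, depth, -1_000_000, 1_000_000, true));
        System.out.println("score: " + cs.take().get()); // first thread to finish wins
        pool.shutdownNow();
    }

    static int alphaBeta(long pos, int depth, int alpha, int beta, boolean max) {
        Entry e = tt.get(pos);
        if (e != null && e.depth() >= depth) return e.score(); // hit from any thread
        if (depth == 0) return evaluate(pos);
        int best = max ? -1_000_000 : 1_000_000;
        for (long child : children(pos)) {
            int v = alphaBeta(child, depth - 1, alpha, beta, !max);
            if (max) { best = Math.max(best, v); alpha = Math.max(alpha, v); }
            else     { best = Math.min(best, v); beta  = Math.min(beta,  v); }
            if (alpha >= beta) break;                          // the usual cutoff
        }
        tt.put(pos, new Entry(depth, best));                   // share the result
        return best;
    }

    // Synthetic stand-ins for real move generation and evaluation.
    static long[] children(long pos) {
        long[] c = new long[3];
        for (int i = 0; i < 3; i++) c[i] = pos * 6364136223846793005L + i + 1;
        return c;
    }
    static int evaluate(long pos) { return (int) (Long.rotateLeft(pos, 17) % 100); }
}
```

Each thread runs the identical search; whichever finishes first supplies the answer, and every table entry written along the way saves work for the threads still running.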
Though I should also point out that if what you are looking for is improving your time to depth, implementing multithreading won't do all that much for you (and in some extreme cases it might actually even slow things down!). What you need instead is more aggressive pruning of the search tree and a more efficient implementation (e.g. no memory allocations, so the garbage collector never has to run).

Clojure: Create and manage multiple threads

I wrote a program which needs to process a very large dataset, and I'm planning to run it with multiple threads on a high-end machine.
I'm a beginner in Clojure and I'm lost in the myriad of tools at my disposal:
agents, futures, core.async (and Quartzite?). I would like to know which one is best suited for this job.
The following describes my situation:
I have a function which transforms some data and stores it in a database.
The argument to said function is popped from a Redis set.
I want to run the function in several separate threads for as long as there are values in the Redis set.
For simplicity, futures can't be beat. They create a new thread, and return a value from it. However, often you need more fine-grained control than they provide.
The core.async library has nice support for parallelism (via pipeline), and it also provides automatic back-pressure. You need some way to control the flow of data so that no one is starving for work or burdened by too much of it; core.async channels must be bounded, which helps with exactly that. Also, it's a pretty logical model of your problem: taking a value from a source, transforming it (maybe using a transducer?) with some given parallelism, and then putting the result into your database.
You can also go the manual route of using Java's excellent j.u.concurrent library. There are low-level primitives as well as thread-management tools for thread pools. All of this is accessible from Clojure.
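As a concrete illustration of that j.u.concurrent route (written here in plain Java; the same calls work via interop from Clojure), here is a hypothetical sketch in which a bounded BlockingQueue stands in for the Redis set (you would swap in a real client such as Jedis) and a small pool of workers pops, transforms, and stores until a poison-pill value tells them to stop:

```java
import java.util.concurrent.*;

public class QueueWorkers {
    static final String POISON = "__done__";   // sentinel that shuts a worker down

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100); // bounded = back-pressure
        int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int w = 0; w < workers; w++)
            pool.submit(() -> {
                try {
                    // Pop, transform, store, until the poison pill arrives.
                    for (String item; !(item = queue.take()).equals(POISON); )
                        store(transform(item));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

        for (int i = 0; i < 1000; i++) queue.put("item-" + i);  // feed the queue
        for (int w = 0; w < workers; w++) queue.put(POISON);    // one pill per worker
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    static String transform(String item) { return item.toUpperCase(); }
    static void store(String row) { /* database write goes here */ }
}
```

The bounded queue gives you the same back-pressure that core.async channels do: the producer blocks whenever the workers fall behind.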
From a design standpoint, it comes down to whether you are more CPU-bound or I/O-bound. This affects decisions such as whether or not you will perform parallel reads from redis and writes to your database. If you are CPU-bound and thus your bottleneck is the computation, then it wouldn't make much sense to parallelize your reads from redis, or your writes to your database, would it? These are the types of things to consider.
You really have two problems to solve: (1) your familiarity with clojure's/java's concurrency mechanisms, and (2) your approach to this problem (i.e., how would you approach this problem, irrespective of the language you're using?). Once you solve #2, you will have a much better idea of which tools to use that I mentioned above, and how to use them.
Sounds like you may have a good embarrassingly parallel problem to solve. In that case, you could start simply by coding up your processing into a top-level function that processes the first datum. Once that's working, wrap it in a map to handle all of the data sequentially (serially, one at a time).
You might want to start tackling the bigger problem with just a few items from your data set. That will make your testing smoother and faster.
After you have the map working, it's time to just add a p (parallel) to your code to make it a pmap. This is a very rewarding way to heat up your machine. There is a discussion about the number of threads pmap uses.
The above is the simplest approach. If you need finer control over the concurrency, this concurrency screencast explores the use cases.
It is hard to be precise without knowing the details of your problem. There are several choices, as you mention:
Plain Java threads & threadpools. If your problem is similar to a pre-existing Java solution, this may be the most straightforward.
Simple Clojure threading with future et al. Kicking off a thread with future and getting the result in a promise is very easy.
Replace map with pmap (parallel map). This can help in simple cases that are primarily map/reduce oriented.
The Claypoole library: Lots of tools to make multithreading simpler and easier. Please see their GitHub project and the Clojure/West talk.

Matlab parallel programming

First of all, sorry for the general title and, probably, for the general question.
I'm facing a dilemma. I have always worked in C++, and right now I'm trying to do something very similar to one of my previous projects: parallelize a single-target object tracker written in MATLAB, so as to assign each concurrent thread an object and then gather the results at each frame. In C++ I used the Boost thread API to do so, with good results. Is this possible in MATLAB? Reading around, I'm finding it rather unclear; I'm reading a lot about the parfor loop, but is that pretty much it? Can I impose synchronization barriers, similar to boost::barrier, to stop each thread after each frame and have it wait for the others before going on to the next frame?
Basically, I wish to initialize some common data structures and then launch a few parallel instances of the tracker, which share some data and take different objects to track as input. Any suggestions will be greatly appreciated!
parfor is only one piece of functionality provided by Parallel Computing Toolbox. It's the simplest, and most people find it the most immediately useful, which is probably why most of the resources your research has found discuss only that.
parfor gives you a way to very simply parallelize "embarrassingly parallel" tasks, in other words tasks that are independent and do not require any communication between them (such as, for example, parameter sweeps or Monte Carlo analyses).
It sounds like that's not what you need. From your question, I'm not entirely sure exactly what you do need; but since you mention synchronization, barriers, and waiting for one task to finish before another moves forward, I would suggest you take a look at features of Parallel Computing Toolbox such as labSend, labReceive, labBarrier, and spmd, which allow you to implement a more message-passing style of parallelization. There is plenty more functionality in the toolbox than just parfor.
Also - don't be afraid to ask MathWorks for advice on this, there are several (free) recorded webinars and tutorials on this sort of parallelization that they can point you towards.
Hope that helps!

Software to Tune/Calibrate Properties for Heuristic Algorithms

Today I read that there is a piece of software called WinCalibra (scroll a bit down on that page) which can take a text file with properties as input.
This program can then optimize the input properties based on the output values of your algorithm. See this paper or the user documentation for more information (see the link above; sadly the doc is a zipped exe).
Do you know of other software which can do the same and runs under Linux? (Preferably open source.)
EDIT: Since I need this for a Java application: should I invest my research in Java libraries like GAUL or Watchmaker? The problem is that I don't want to roll my own solution, nor do I have time to do so. Do you have pointers to out-of-the-box applications like CALIBRA? (Internet searches weren't successful; I only found libraries.)
I decided to give away the bounty (otherwise no one would have benefited) although I didn't find a satisfactory solution :-( (an out-of-the-box application)
Some kind of probability-weighted random walk (in the style of the Metropolis algorithm) is a possibility in this instance, perhaps with simulated annealing to improve the final selection, though the timing parameters you've supplied are not optimal for getting a really great result this way.
It works like this:
1. You start at some point. Use your existing data to pick one that looks promising (like the highest value you've got). Set o to the output value at this point.
2. Propose a randomly selected step in the input space, and assign the output value there to n.
3. Accept the step (that is, update the working position) if (a) n > o, or (b) the new value is lower but a random number on [0, 1) is less than f(n/o), for some monotonically increasing f() with range and domain on [0, 1).
4. Repeat steps 2 and 3 as long as you can afford to, collecting statistics at each step.
5. Finally, compute the result. In your case an average of all points is probably sufficient.
Important frill: this approach has trouble if the space has many local maxima with deep dips between them, unless the step size is big enough to get past the dips; but big steps make the whole thing slow to converge. To fix this you do two things:
Do simulated annealing (start with a large step size and gradually reduce it, thus allowing the walker to move between local maxima early on, but trapping it in one region later to accumulate precise results).
Use several (many, if you can afford it) independent walkers, so that they can get trapped in different local maxima. The more you use, and the bigger the difference in output values, the more likely you are to find the best maximum.
This is not necessary if you know that you have only one big, broad, nicely behaved local extreme.
Finally, the selection of f(). You can just use f(x) = x, but you'll get optimal convergence if you use f(x) = exp(-(1/x)).
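Here is a small Java sketch of the walk described above, for maximizing a black-box objective over a two-parameter input space. The objective function is a made-up stand-in for "run your algorithm with these parameters and read the output"; the accept rule uses f(x) = exp(-(1/x)) as suggested (note that f(n/o) = exp(-o/n)), the step size shrinks over time as in the annealing frill, and for simplicity it keeps the best point visited rather than averaging:

```java
import java.util.Random;

public class MetropolisTuner {
    public static void main(String[] args) {
        Random rng = new Random(7);
        double[] x = {0.5, 0.5};            // step 1: start at a promising point
        double o = objective(x);            // current output value
        double step = 1.0;                  // large steps early on
        double[] best = x.clone(); double bestVal = o;

        for (int i = 0; i < 10_000; i++) {
            double[] cand = x.clone();      // step 2: propose a random move
            for (int d = 0; d < cand.length; d++)
                cand[d] += step * rng.nextGaussian();
            double n = objective(cand);
            // step 3: accept if better, else with probability f(n/o) = exp(-o/n)
            // (assumes positive output values, so n/o lies in (0, 1) when worse)
            if (n > o || (o > 0 && n > 0 && rng.nextDouble() < Math.exp(-o / n))) {
                x = cand; o = n;
            }
            if (o > bestVal) { bestVal = o; best = x.clone(); }
            step *= 0.9995;                 // anneal: gradually reduce step size
        }
        System.out.printf("best %.4f at (%.3f, %.3f)%n", bestVal, best[0], best[1]);
    }

    // Placeholder for "run your algorithm with these parameters, read the output".
    static double objective(double[] p) {
        return Math.exp(-(p[0] - 1) * (p[0] - 1) - (p[1] + 2) * (p[1] + 2));
    }
}
```

Running several instances of this with different seeds gives you the independent-walkers effect mentioned above.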
Again, you don't have enough time for a great many steps (though if you have multiple computers, you can run separate instances to get the multiple walkers effect, which will help), so you might be better off with some kind of deterministic approach. But that is not a subject I know enough about to offer any advice.
There is a lot of genetic-algorithm-based software that can do exactly that. I wrote a PhD thesis about it a decade or two ago.
A Google search for "genetic algorithms Linux" shows a load of starting points.
Intrigued by the question, I did a bit of poking around, trying to get a better understanding of the nature of CALIBRA, its standing in academic circles, and the existence of similar software or projects in the open source and Linux world.
Please be kind (and, please, edit directly or suggest edits) about the likely instances where my assertions are incomplete, inexact, or even flat-out incorrect. While I work in related fields, I'm by no means an Operational Research (OR) authority!
The [algorithm] parameter tuning problem is a relatively well defined one, typically framed as a solution-search problem: the combinations of all possible parameter values constitute a solution space, and the aim of the tuning logic is to "navigate" [portions of] this space in search of an optimal (or locally optimal) set of parameters.
The optimality of a given solution is measured in various ways, and such metrics help direct the search. In the case of the parameter tuning problem, the validity of a given solution is measured, directly or through a function, from the output of the algorithm [i.e. the algorithm being tuned, not the algorithm of the tuning logic!].
Framed as a search problem, the discipline of algorithm parameter tuning doesn't differ significantly from other solution-search problems where the solution space is defined by something other than the parameters of a given algorithm. But because it works on algorithms which are themselves solutions of sorts, this discipline is sometimes referred to as metaheuristics or metasearch. (A metaheuristic approach can be applied to various algorithms.)
Certainly there are many features specific to the parameter tuning problem as compared to other optimization applications, but with regard to the solution searching per se, the approaches and problems are generally the same.
Indeed, while well defined, the search problem is generally still broadly unsolved, and is the object of active research in very many different directions, for many different domains. Various approaches offer mixed success depending on the specific conditions and requirements of the domain, and this vibrant and diverse mix of academic research and practical applications is a common trait to Metaheuristics and to Optimization at large.
So... back to CALIBRA...
By its own authors' admission, CALIBRA has several limitations:
Limit of 5 parameters, maximum
Requirement of a range of values for [some of ?] the parameters
Works better when the parameters are relatively independent (but... wait, when that is the case, isn't the whole search problem much easier ;-) )
CALIBRA is based on a combination of approaches, which are repeated in a sequence. A mix of guided search and local optimization.
The paper where CALIBRA was presented is dated 2006. Since then, there have been relatively few references to this paper and to CALIBRA at large. Its two authors have since published several other papers in various disciplines related to Operational Research (OR).
This may be indicative that CALIBRA hasn't been perceived as a breakthrough.
State of the art in that area ("parameter tuning", "algorithm configuration") is the SPOT package in R. You can connect external fitness functions using a language of your choice. It is really powerful.
I am working on adapters for, e.g., C++ and Java that simplify the experimental setup, which requires some getting used to in SPOT. The project goes under the name InPUT, and a first version of the tuning part will be up soon.

What's the opposite of "embarrassingly parallel"?

According to Wikipedia, an "embarrassingly parallel" problem is one for which little or no effort is required to separate the problem into a number of parallel tasks. Raytracing is often cited as an example because each ray can, in principle, be processed in parallel.
Obviously, some problems are much harder to parallelize. Some may even be impossible. I'm wondering what terms are used and what the standard examples are for these harder cases.
Can I propose "Annoyingly Sequential" as a possible name?
Inherently sequential.
Example: increasing the number of women will not reduce the length of a pregnancy.
There's more than one opposite of an "embarrassingly parallel" problem.
Perfectly sequential
One opposite is a non-parallelizable problem, that is, a problem for which no speedup may be achieved by utilizing more than one processor. Several suggestions were already posted, but I'd propose yet another name: a perfectly sequential problem.
Examples: I/O-bound problems, "calculate f^1000000(x0)"-type problems (i.e. a million-fold iterated application of f), calculating certain cryptographic hash functions.
Communication-intensive
Another opposite is a parallelizable problem which requires a lot of parallel communication (a communication-intensive problem). An implementation of such a problem will scale properly only on a supercomputer with high-bandwidth, low-latency interconnect. Contrast this with embarrassingly parallel problems, implementations of which run efficiently even on systems with very poor interconnect (e.g. farms).
Notable example of a communication-intensive problem: solving A x = b where A is a large, dense matrix. As a matter of fact, an implementation of the problem is used to compile the TOP500 ranking. It's a good benchmark, as it emphasizes both the computational power of individual CPUs and the quality of interconnect (due to intensity of communication).
In more practical terms, any mathematical model which solves a system of partial differential equations on a regular grid using discrete time stepping (think: weather forecasting, in silico crash tests), is parallelizable by domain decomposition. That means, each CPU takes care of a part of the grid, and at the end of each time step the CPUs exchange their results on region boundaries with "neighbour" CPUs. These exchanges render this class of problems communication-intensive.
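A shared-memory toy version of this, in Java, for a 1-D heat equation: the grid is split among threads, and the barrier after each time step marks the point where a distributed implementation would exchange boundary values with its neighbours over the network. Grid sizes and coefficients here are arbitrary:

```java
import java.util.concurrent.CyclicBarrier;

public class HaloExchange {
    public static void main(String[] args) throws Exception {
        int cells = 1_000, threads = 4, steps = 500;
        double[] u = new double[cells], next = new double[cells];
        u[cells / 2] = 100.0;                       // a heat spike in the middle
        CyclicBarrier barrier = new CyclicBarrier(threads);

        Thread[] workers = new Thread[threads];
        int chunk = cells / threads;
        for (int t = 0; t < threads; t++) {
            final int lo = t * chunk;
            final int hi = (t == threads - 1) ? cells : lo + chunk;
            workers[t] = new Thread(() -> {
                try {
                    for (int s = 0; s < steps; s++) {
                        // Reading u[lo-1] and u[hi] touches a neighbour's region:
                        // in shared memory that's free, on a cluster it's a message.
                        for (int i = Math.max(lo, 1); i < Math.min(hi, cells - 1); i++)
                            next[i] = u[i] + 0.25 * (u[i - 1] - 2 * u[i] + u[i + 1]);
                        barrier.await();            // everyone finished writing 'next'
                        for (int i = lo; i < hi; i++) u[i] = next[i];
                        barrier.await();            // everyone finished updating 'u'
                    }
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println("centre temperature: " + u[cells / 2]);
    }
}
```

The two barriers per time step are the communication pattern that makes this class of problems scale poorly on hardware with slow interconnects: the exchanges happen at every single step, not once at the end.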
I'm having a hard time not posting this... because I know it doesn't add anything to the discussion... but for all the South Park fans out there:
"Super serial!"
"Stubbornly serial"?
The opposite of embarrassingly parallel is captured by Amdahl's Law, which says that some parts of a task cannot be parallelized, and that the minimum time a perfectly parallelized task will require is dictated by its purely sequential portion.
"standard examples" of sequential processes:
making a baby: "Crash programs fail because they are based on the theory that, with nine women pregnant, you can get a baby a month." -- attributed to Wernher von Braun
calculating pi, e, sqrt(2), and other irrational numbers to millions of digits: most algorithms are sequential
navigation: to get from point A to point Z, you must first go through some intermediate points B, C, D, etc.
Newton's method: you need each approximation in order to calculate the next, better approximation
challenge-response authentication
key strengthening
hash chain
Hashcash
P-complete (but that's not known for sure yet).
I use "Humiliatingly Sequential"
Paul
If one were ever to speculate about what it would be like to have natural, incorrigibly sequential problems, try "blissfully sequential" to counter "embarrassingly parallel".
"Gladdengly Sequential"
It all has to do with data dependencies. Embarrassingly parallel problems are ones for which the solution is made up of many independent parts. Problems with the opposite of this nature would be ones that have massive data dependencies, where there is little to nothing that can be done in parallel. Degeneratively dependent?
The term I've heard most often is "tightly-coupled", in that each process must interact and communicate often in order to share intermediate data. Basically, each process depends on others to complete their computation.
For example, matrix processing often involves sharing boundary values at the edges of each array partition.
This is in contrast to embarrassingly parallel (or loosely-coupled) problems, where each part of the problem is completely self-contained and no (or very little) IPC is needed. Think master/worker parallelism.
Boastfully sequential.
I've always preferred 'sadly sequential' ala the partition step in quicksort.
"Completely serial?"
It shouldn't really surprise you that scientists think more about what can be done than what cannot be done. Especially in this case, where the alternative to parallelizing is doing everything as one normally would.
Completely non-parallelizable?
Pessimally parallelizable?
The opposite is "disconcertingly serial".
Taking into account that parallelism is the act of doing many jobs in the same time step t, the opposite could be called time-sequential problems.
An example of an inherently sequential problem, common in CAD packages and some kinds of engineering analysis: tree traversal with data dependencies between nodes.
Imagine traversing a graph and adding up the weights of nodes; you just can't parallelise it.
CAD software represents parts as a tree, and to render the object you have to traverse the tree. For this reason, CAD workstations use fewer, faster cores rather than many cores.
Thanks for reading.
You could, of course, but I think both "names" are a non-issue.
From a functional programming perspective, you could say that the "annoyingly sequential" part is the smallest more-or-less independent part of an algorithm, while failing to take a parallel approach to an "embarrassingly parallel" part is simply bad coding practice.
Thus I don't see the point in giving these things a name, if good coding practice is always to break up your solution into independent pieces, even if at that moment you don't take advantage of parallelism.
