I am currently looking into prediction algorithms. My question: I have a set of threads in a process, let's say T1, T2, T3, T4. Initially I receive a request, based on which I run these threads in some order, say T2-T1-T3-T4; for another request, T3-T1-T2-T4; and so on for another N iterations. If I want to predict the order of execution for the next M requests, which algorithm can I use, and how do I make the prediction?
I would like to create a variable number of threads in Prolog and make the main thread wait for all of them.
I have tried calling thread_join for each of them in the predicate, but they seem to wait for one another and run in sequential order.
I have also tried storing the ids of the threads in a list and joining each one afterwards, but it still isn't working.
In the code sample, I have also tried passing the S parameter of thread_join along in the recursive call.
thr1(0) :- !.
thr1(N) :-
    thread_create(someFunction(N), Id, []),
    thread_join(Id, S),    % S is the exit status; joining here blocks before the next thread is created
    N1 is N - 1,
    thr1(N1).
I expect the N goals to produce interleaved output when they print, but they run in sequential order.
Most likely the calls to your someFunction/1 predicate succeed faster than the time it takes to create the next thread, which is a relatively heavy process as SWI-Prolog threads are mapped to POSIX threads. Thus, to actually get overlapping results, the computation time of the thread goals must exceed thread creation time. For a toy example of accomplishing that, see:
https://github.com/LogtalkDotOrg/logtalk3/tree/master/examples/threads/sync
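The timing argument is not Prolog-specific. Purely for illustration, here is a hypothetical Java sketch of the create-all-then-join-all pattern, with the per-thread busy work sized so that each worker outlives thread creation (the worker count and loop bound are arbitrary):

import java.util.ArrayList;
import java.util.List;

public class OverlapDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int n = 1; n <= 4; n++) {
            final int id = n;
            Thread t = new Thread(() -> {
                long sum = 0;
                // Busy work long enough to exceed thread creation time.
                for (long i = 0; i < 50_000_000L; i++) sum += i;
                System.out.println("worker " + id + " done (" + sum + ")");
            });
            t.start();
            threads.add(t); // collect handles; do NOT join inside the loop
        }
        for (Thread t : threads) t.join(); // main thread waits for all
    }
}

If each worker's loop is made trivially short, the output tends to come out in creation order again, which is exactly the effect described above.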
The scenario is as follows:
We have three tasks: T1, T2, T3. T1 is a time-consuming process, and the output of T1 is used by T2. The order of operations is T1-T2-T3.
In Node.js programming, they could be thought of as follows:
T1: fs.readFile(filename, mode, callback); // most expensive computation
T2: take the file content from T1 and parse it according to some logic.
T3: generate a report based on the parsed content.
Note: I am expecting an answer on how to implement asynchronous programming for T1, or whether it can only be done in a synchronous way. :)
You may have the option of not reading the file all at once, but instead doing e.g. line-based parsing, firing your events after each line is read.
This will likely complicate your logic quite a lot, and whether it is worth the effort really depends on your T2 and T3 costs. (It would likely only help if T2 and T3 are also somewhat costly and can be executed on a different thread; see the sketch below.)
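For illustration only, here is what that incremental pipeline could look like in Java (the pattern is not Node.js-specific; the file name and parsing logic are placeholders, and the parsing runs on a separate thread via a single-threaded executor):

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StreamingPipeline {
    // Placeholder for "parse according to some logic".
    static String parse(String line) { return line.trim().toUpperCase(); }

    public static void main(String[] args) throws Exception {
        ExecutorService parser = Executors.newSingleThreadExecutor();
        StringBuilder report = new StringBuilder(); // written only by the parser thread
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) { // T1: read line by line
                final String current = line;
                parser.submit(() -> report.append(parse(current)).append('\n')); // T2: fire per-line work
            }
        }
        parser.shutdown();
        parser.awaitTermination(1, TimeUnit.MINUTES); // wait for all parse tasks
        System.out.println(report); // T3: emit the report
    }
}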
Assume there are 2 threads performing operations on a shared queue q. The lines of code for each thread are numbered, and initially the queue is empty.
Thread A:
A1) q.enq(x)
A2) q.deq()
Thread B:
B1) q.enq(y)
Assume that the order of execution is as follows:
A1) q.enq(x)
B1) q.enq(y)
A2) q.deq()
and as a result we get y (i.e. q.deq() returns y)
This execution comes from a well-known book and is said to be sequentially consistent. Notice that the method calls don't overlap. How is that even possible? I believe that Thread A executed A1 without actually updating the queue until it proceeded to line A2, but that's just my guess. I'm even more confused when I look at this explanation from The Java Language Specification:
Sequential consistency is a very strong guarantee that is made about visibility and ordering in an execution of a program. Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread.
If that were the case, we would have dequeued x.
I'm sure I'm wrong somehow. Could somebody shed some light on this?
Note that the definition of sequential consistency says "consistent with program order", not "consistent with the order in which the program happens to be executed".
It goes on to say:
If a program has no data races, then all executions of the program will appear to be sequentially consistent.
(my emphasis of "appear").
Java's memory model does not enforce sequential consistency. As the JLS says:
If we were to use sequential consistency as our memory model, many of the compiler and processor optimizations that we have discussed would be illegal. For example, in the trace in Table 17.3, as soon as the write of 3 to p.x occurred, subsequent reads of that location would be required to see that value.
So Java's memory model doesn't actually support sequential consistency. Just the appearance of sequential consistency. And that only requires that there is some sequentially consistent order of actions that's consistent with program order.
Clearly there is some execution of threads A and B that could result in A2 returning y, specifically:
B1) q.enq(y)
A1) q.enq(x)
A2) q.deq()
So, even if the program happens to be executed in the order you specified, there is an order in which it could have been executed that is "consistent with program order" for which A2 returns y. Therefore, a program that returns y in that situation still gives the appearance of being sequentially consistent.
Note that this shouldn't be interpreted as saying that it would be illegal for A2 to return x, because there is a sequentially consistent sequence of operations that is consistent with program order that could give that result.
Note also that this appearance of sequential consistency only applies to correctly synchronized programs. If your program is not correctly synchronized (i.e. has data races) then all bets are off.
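To make "some sequentially consistent order" concrete, here is a small Java sketch (hypothetical, using ConcurrentLinkedQueue as the shared q): across runs, A2 may return either x or y, and each outcome corresponds to a legal sequential order of the three calls that respects each thread's program order.

import java.util.concurrent.ConcurrentLinkedQueue;

public class SeqConsistencyDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<String> q = new ConcurrentLinkedQueue<>();

        Thread a = new Thread(() -> {
            q.add("x");                   // A1) q.enq(x)
            System.out.println(q.poll()); // A2) q.deq(): prints x or y
        });
        Thread b = new Thread(() -> q.add("y")); // B1) q.enq(y)

        a.start();
        b.start();
        a.join();
        b.join();
    }
}

If B1 takes effect before A1, the queue holds y then x, so A2 returns y; if B1 takes effect after A1 (or after A2), A2 returns x. Both results are sequentially consistent.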
All,
I would like to use ILNumerics for computations to be run in parallel. They are completely uncoupled. I would need it for:
1) random restarts for an optimiser (especially a stochastic optimiser, e.g. simulated annealing): solving the same optimisation problem in parallel, starting from different points:
e.g. argmin_x f(x) starting from x0_h, h = 1, 2, ..., K
2) the same optimisation to be run over sets of uncoupled data; as an example, consider the following unconstrained optimisation problem:
given a function f : R^d x R^p --> R of x \in R^d and parameters p \in R^p,
solve argmin_x f(x, p_h), h = 1, 2, ..., K.
I hope the notation is clear enough.
Would it be possible to run this loop in parallel, executing each time some lambda expression involving ILNumerics objects and leveraging multicore architectures?
Thanks in advance, as usual,
GL
It depends: ILNumerics automatically parallelizes mathematical expressions like
C = A + B[":;2"] / 0.4 * pinv(C) ...
By attempting to run multiple instances of such expressions in parallel, using multiple threads from the thread pool, you would end up producing a lot of contention, with too many threads competing for the CPU time slots. As a result, your algorithm may run slower than without parallelizing it.
So, in that case you may disable the internal automatic parallelization ILNumerics does transparently for you:
Settings.MaxNumberThreads = 1;
Expressions like the one above will be evaluated within a single thread afterwards. However, you are now responsible for distributing computational tasks over multiple threads. Moreover, you will have to lock your arrays accordingly, because ILNumerics is not thread safe in general! This allows you to write concurrently to your output arrays, but it also brings the burden of having to implement a correct locking scheme...
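ILNumerics specifics aside, the overall shape of the manual distribution is language-neutral. A minimal sketch of parallel random restarts (written in Java here only for illustration; the objective function and starting points are hypothetical stand-ins, and each task is fully uncoupled, so no locking is needed):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRestarts {
    // Hypothetical objective: f(x) = (x - 3)^2.
    static double f(double x) { return (x - 3) * (x - 3); }

    // Stand-in for one optimiser run started at x0: a crude local search.
    static double optimiseFrom(double x0) {
        double best = x0;
        for (double x = x0 - 1; x <= x0 + 1; x += 1e-4)
            if (f(x) < f(best)) best = x;
        return best;
    }

    public static void main(String[] args) throws Exception {
        int k = 8; // K restarts, one task per starting point x0_h
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        List<Future<Double>> runs = new ArrayList<>();
        for (int h = 0; h < k; h++) {
            final double x0 = h; // in practice: random starting points
            runs.add(pool.submit(() -> optimiseFrom(x0)));
        }
        double best = Double.NaN;
        for (Future<Double> run : runs) {
            double x = run.get(); // collect each restart's result
            if (Double.isNaN(best) || f(x) < f(best)) best = x;
        }
        pool.shutdown();
        System.out.println("argmin_x f(x) ~ " + best);
    }
}

With ILNumerics, the body of each task would hold the expression work, with Settings.MaxNumberThreads = 1 set as described above, and locking added wherever tasks share arrays.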
What is the difference between a dead lock and a race around condition in programming terms?
Think of a race condition using the traditional example. Say you and a friend each have an ATM card for the same bank account. Now suppose the account has $100 in it. Consider what happens when you attempt to withdraw $10 and your friend attempts to withdraw $50 at exactly the same time.
Think about what has to happen. The ATM must take your input, read what is currently in your account, and then modify the amount. Note that, in programming terms, an assignment statement is a multi-step process.
So, label both of your transactions T1 (you withdraw $10), and T2 (your friend withdraws $50). Now, the numbers below, to the left, represent time steps.
        T1                          T2
    ----------------            ------------------------
1.  Read Acct ($100)
2.                              Read Acct ($100)
3.  Write New Amt ($90)
4.                              Write New Amt ($50)
5.  End
6.                              End
After both transactions complete, using this timeline (which is possible if you don't use any sort of locking mechanism), the account has $50 in it. That is $10 more than it should have (your transaction is lost forever, but you still have the money).
This is called a race condition. What you want is for the transactions to be serializable: no matter how you interleave the individual instruction executions, the end result is exactly the same as some serial schedule (meaning you run them one after the other with no interleaving) of the same transactions. The solution, again, is to introduce locking; however, incorrect locking can lead to deadlock.
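Before moving on to deadlock, here is a hedged Java sketch of the lost update above (the Account class is hypothetical; withdrawUnsafe reproduces the race, and making the method synchronized serializes the transactions):

public class Account {
    private int balance = 100;

    // Broken: the read-modify-write is not atomic, so two concurrent
    // withdrawals can both read $100 and one update gets lost.
    void withdrawUnsafe(int amount) {
        int current = balance;      // Read Acct
        balance = current - amount; // Write New Amt
    }

    // Fixed: the lock serializes the two transactions.
    synchronized void withdrawSafe(int amount) {
        balance -= amount;
    }

    public static void main(String[] args) throws InterruptedException {
        Account acct = new Account();
        Thread t1 = new Thread(() -> acct.withdrawUnsafe(10)); // you: -$10
        Thread t2 = new Thread(() -> acct.withdrawUnsafe(50)); // friend: -$50
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Correct result is 40; the unsafe version can print 50 or 90.
        // Swap in withdrawSafe above to get 40 every time.
        System.out.println(acct.balance);
    }
}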
Deadlock occurs when there is a conflict over a shared resource. It's sort of like a Catch-22.
         T1                  T2
      -------             --------
 1.  Lock(x)
 2.                       Lock(y)
 3.  Write x=1
 4.                       Write y=19
 5.  Lock(y)
 6.  Write y=x+1
 7.                       Lock(x)
 8.                       Write x=y+2
 9.  Unlock(x)
10.                       Unlock(x)
11.  Unlock(y)
12.                       Unlock(y)
You can see that a deadlock occurs at time 7: T2 tries to acquire the lock on x, but T1 already holds the lock on x while waiting for the lock on y, which T2 holds.
This is bad. You can turn this diagram into a dependency graph and you will see that there is a cycle. The problem here is that x and y are resources that may be modified together.
One way to prevent this sort of deadlock problem with multiple lock objects (resources) is to introduce an ordering. You see, in the previous example, T1 locked x and then y, but T2 locked y and then x. If both transactions adhered to some ordering rule that says "x shall always be locked before y", then this problem would not occur. (You can change the previous example with this rule in mind and see that no deadlock occurs; a sketch follows below.)
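A minimal Java sketch of that rule (the locks and writes are placeholders): both threads acquire the lock for x before the lock for y, so the wait cycle from the table above cannot form.

public class LockOrdering {
    private static final Object lockX = new Object();
    private static final Object lockY = new Object();
    private static int x, y;

    public static void main(String[] args) throws InterruptedException {
        // Rule: every thread locks x before y.
        Thread t1 = new Thread(() -> {
            synchronized (lockX) {
                x = 1;
                synchronized (lockY) { y = x + 1; }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockX) { // was Lock(y) first in the deadlocking trace
                synchronized (lockY) {
                    y = 19;
                    x = y + 2;
                }
            }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("x=" + x + ", y=" + y); // always terminates: no cycle
    }
}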
These are trivial examples and really I've just used the examples you may have already seen if you have taken any kind of undergrad course on this. In reality, solving deadlock problems can be much harder than this because you tend to have more than a couple resources and a couple transactions interacting.
As always, use Wikipedia as a starting point for CS concepts:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition
A deadlock is when two (or more) threads are blocking each other. Usually this has something to do with threads trying to acquire shared resources. For example, suppose threads T1 and T2 need to acquire both resources A and B in order to do their work. If T1 acquires resource A and T2 acquires resource B, then T1 could be waiting for resource B while T2 waits for resource A. In this case, both threads will wait indefinitely for the resource held by the other thread. These threads are said to be deadlocked.
Race conditions occur when two threads interact in a negative (buggy) way that depends on the exact order in which their instructions are executed. For example, if one thread sets a global variable, a second thread then reads and modifies it, and the first thread reads the variable again, the first thread may hit a bug because the variable has changed unexpectedly.
Deadlock:
This happens when two or more threads wait on each other to release a resource for an infinite amount of time.
Here the threads are in a blocked state and not executing.
Race/Race condition:
This happens when two or more threads run in parallel but end up giving a result which is wrong, i.e. not equivalent to the result of performing all the operations in some sequential order.
Here all the threads run and execute their operations.
In coding we need to avoid both race and deadlock conditions.
I assume you mean "race conditions" and not "race around conditions" (I've heard that term...)
Basically, a deadlock is a condition where thread A is waiting for resource X while holding a lock on resource Y, and thread B is waiting for resource Y while holding a lock on resource X. The threads block, waiting for each other to release their locks.
The solution to this problem is (usually) to ensure that you take locks on all resources in the same order in all threads. For example, if you always lock resource X before resource Y then my example can never result in a deadlock.
A race condition is something where you're relying on a particular sequence of events happening in a certain order, but that can be messed up if another thread is running at the same time. For example, to insert a new node into a linked list, you need to modify the list head, usually something like so:
newNode->next = listHead;
listHead = newNode;
But if two threads do that at the same time, then you might have a situation where they run like so:
Thread A                        Thread B
newNode1->next = listHead
                                newNode2->next = listHead
                                listHead = newNode2
listHead = newNode1
If this were to happen, then Thread B's modification of the list will be lost because Thread A would have overwritten it. It can be even worse, depending on the exact situation, but that's the basics of it.
The solution to this problem is usually to ensure that you include the proper locking mechanisms (for example, take out a lock any time you want to modify the linked list so that only one thread is modifying it at a time).
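A hedged Java version of that fix (the Node and list classes are hypothetical): both steps of the insert happen under one lock, so a concurrent insert cannot slip in between them.

public class LockedList {
    static final class Node {
        final int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    private Node head;
    private final Object lock = new Object();

    void push(int value) {
        Node newNode = new Node(value);
        synchronized (lock) {
            newNode.next = head; // newNode->next = listHead
            head = newNode;      // listHead = newNode
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockedList list = new LockedList();
        Thread a = new Thread(() -> list.push(1));
        Thread b = new Thread(() -> list.push(2));
        a.start(); b.start();
        a.join(); b.join();
        int count = 0;
        for (Node n = list.head; n != null; n = n.next) count++;
        System.out.println(count); // always 2; without the lock a node could be lost
    }
}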
With respect to programming languages: if shared resources accessed by multiple threads are not locked, that is called a "race condition". In the second case, if you lock the resources but the sequence of access to the shared resources is not defined properly, threads may end up waiting a long time for the resources; that is a case of "deadlock".