I have recently observed in Java (while implementing a deep recursive function call) that the stack available to a newly created thread is larger than the stack of the main thread of the process.
By this I mean that, for example, the thread could execute roughly 30,000 recursive calls, while the same function called without a separate thread could only reach about 10,000 recursive calls.
Can anyone suggest why this is so?
For better understanding and context, please run the Java code below as it is and look at the messages printed on the console.
package com.java.concept;
/**
* This provides a mechanism to increase the call stack size: by starting a separate thread from the caller we can increase it.
* Results were roughly 3 times higher.
*/
public class DeepRecursionCallStack {
private static int level = 0;
public static long fact(int n) {
level++;
return n < 2 ? n : n * fact(n - 1);
}
public static void main(String[] args) throws InterruptedException {
Thread t = new Thread(null, null, "DeepRecursionCallStack", 1000000) {
@Override
public void run() {
try {
level = 0;
System.out.println(fact(1 << 15));
} catch (StackOverflowError e) {
System.err.println("New thread : true recursion level was " + level);
System.err.println("New thread : reported recursion level was "
+ e.getStackTrace().length);
}
}
};
t.start();
t.join();
try {
level = 0;
System.out.println(fact(1 << 15));
} catch (StackOverflowError e) {
System.err.println("Main code : true recursion level was " + level);
System.err.println("Main code : reported recursion level was "
+ e.getStackTrace().length);
}
}
}
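As a side note (not part of the original post): the 1000000 passed to the Thread constructor above is the requested stack size in bytes, via the Thread(ThreadGroup, Runnable, String, long stackSize) constructor, while the default thread stack size is controlled by the -Xss JVM option; whether -Xss affects the main thread depends on the JVM and platform. Here is a minimal sketch of the same idea in isolation, with made-up names, keeping in mind that the requested size is only a hint that some platforms ignore:
// Hypothetical, self-contained illustration of requesting a larger stack for a worker thread.
public class BigStackSketch {
    public static void main(String[] args) throws InterruptedException {
        Runnable deepWork = () -> System.out.println("reached depth " + recurse(0));
        // Thread(ThreadGroup group, Runnable target, String name, long stackSize)
        Thread worker = new Thread(null, deepWork, "big-stack-worker", 8 * 1024 * 1024);
        worker.start();
        worker.join();
    }
    private static int recurse(int depth) {
        try {
            return recurse(depth + 1);
        } catch (StackOverflowError e) {
            return depth; // report how deep we got before overflowing
        }
    }
}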
My question is really simple: is this program valid as a simulation of the producer-consumer problem?
public class ProducerConsumer {
public static void main(String[] args) {
Consumers c = new Consumers(false, null);
Producer p = new Producer(true, c);
c.p = p;
p.start();
c.start();
}
}
class Consumers extends Thread {
boolean hungry; // I want to eat
Producer p;
public Consumers(boolean hungry, Producer p) {
this.hungry = hungry;
this.p = p;
}
public void run() {
while (true) {
// While the producer wants to produce, don't go
while (p.nice == true) {
// Simulation of the waiting, to check that it doesn't wait and
// eat at the same time, and that there are no bad interleavings
System.out.println("Consumer doesn't eat");
try {
sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
for (int i = 0; i < 3; i++) {
try {
sleep(1000);
// Because the consumer is eating, the producer gets bored and
// wants to produce; that's the meaning of nice.
// This line makes the producer automatically wait in the
// while loop as soon as it has finished producing.
p.nice = true;
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Consumer eat");
}
hungry = false;
System.out.println("\nConsumer doesn't eat anymore\n");
}
}
}
class Producer extends Thread {
boolean nice;
Consumers c;
public Producer(boolean nice, Consumers c) {
this.nice = nice;
this.c = c;
}
public void run() {
while (true) {
/**
* I begin with the producer, so the producer doesn't enter the
* loop, because no food has been produced yet and hungry is
* exceptionally false; that's just how this program works,
* so the first time through the producer doesn't enter the loop.
*/
while (c.hungry == true) {
try {
sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Producer doesn't produce");
}
/**
* While the consumer waits in the while loop of its run method
* (which means that nice is true), the producer produces, and during
* the production the consumer becomes hungry, which makes the
* loop "enterable" for the producer. The advantage of this is
* that the producer already knows that it has to go away after
* producing; the consumer doesn't need to tell it.
* hungry becomes true here, and it has no effect for the first round.
*/
for (int i = 0; i < 3; i++) {
try {
sleep(1000);
c.hungry = true;
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Producer produce");
}
/**
* After a while the producer has produced and the consumer is still in its
* loop, so we can tell it that it can go, but we have to make
* sure that the producer doesn't get past its own loop before the
* consumer goes out, because setting the flag back to true too early would leave the
* consumer stuck again. That's the role of
* c.hungry in the for loop: because the producer knows it has
* a client, it enters its loop directly and so cannot
* starve the client.
*/
System.out.println("\nProducer doesn't produce anymore\n");
nice = false;
}
}
}
I didn't use any synchronization, wait or notify, so for a parallel-programming exercise it seems very strange, but when I run it there aren't any deadlocks, starvation or bad interleavings: the producer produces, then stops; the consumer eats, then stops; and so on, as many times as I wanted.
Have I cheated somewhere?
Thanks!
First of all, be careful with the naming: "Consumers" is misleading, since you are only simulating a lone consumer. nice could also be renamed to something like "producing".
Secondly, you're using while (condition) sleep, which is basically a less efficient, unprotected version of a semaphore wait, so you did use a form of waiting.
E.G.
while (p.nice == true) {
System.out.println("Consumer doesn't eat");
try {
sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
is your P()
System.out.println("\nProducer doesn't produce anymore\n");
nice = false;
is your V()
This approach, however, is both inefficient (the waiting thread is either busy-waiting or sleeping for a moment when it could already go) and unprotected (because there is no protection against simultaneous access to nice and hungry, you won't be able to extend this program with more Consumers or Producers).
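If you want to see what the protected version looks like, here is a minimal sketch (my own illustration, not your code, with made-up class and variable names) using java.util.concurrent.Semaphore for the P()/V() handshake:
import java.util.concurrent.Semaphore;
public class ProducerConsumerSemaphoreSketch {
    // One permit means "it is your turn": acquire() plays the role of P(), release() the role of V().
    private static final Semaphore canProduce = new Semaphore(1);
    private static final Semaphore canConsume = new Semaphore(0);
    public static void main(String[] args) {
        Thread producer = new Thread(() -> {
            for (int round = 0; round < 3; round++) {
                acquire(canProduce);      // P(): wait for my turn
                System.out.println("Producer produces");
                canConsume.release();     // V(): hand over to the consumer
            }
        });
        Thread consumer = new Thread(() -> {
            for (int round = 0; round < 3; round++) {
                acquire(canConsume);      // P(): wait until something was produced
                System.out.println("Consumer eats");
                canProduce.release();     // V(): hand back to the producer
            }
        });
        producer.start();
        consumer.start();
    }
    private static void acquire(Semaphore s) {
        try {
            s.acquire();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
The blocked thread here really sleeps inside acquire() until the permit is released, instead of polling every 500 ms, and the permit counters are updated atomically, so this version can be extended to several producers and consumers.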
Hope this helps.
I downloaded some existing code from the internet and ran it with a few modifications. In one scenario, I did not get what I was looking for. Here is the code:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
public class MyRecursiveAction extends RecursiveAction{
private long workload = 0;
public MyRecursiveAction(long workload) {
this.workload = workload;
}
@Override
protected void compute() {
if(this.workload > 16) {
System.out.println("Splitting workload :: " + this.workload);
List<MyRecursiveAction> subtasks = new ArrayList<MyRecursiveAction>();
subtasks.addAll(createSubtasks());
for(RecursiveAction subtask : subtasks) {
subtask.fork();
}
}else {
System.out.println("Doing work myself1 " + this.workload);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Done it ya " + this.workload);
}
}
private List<MyRecursiveAction> createSubtasks() {
List<MyRecursiveAction> subTasks = new ArrayList<>();
MyRecursiveAction subtask1 = new MyRecursiveAction(this.workload / 2);
MyRecursiveAction subtask2 = new MyRecursiveAction(this.workload / 2);
subTasks.add(subtask1);
subTasks.add(subtask2);
return subTasks;
}
public static void main(String[] args) {
MyRecursiveAction myRecursiveAction = new MyRecursiveAction(24);
ForkJoinPool forkJoinPool = new ForkJoinPool(4);
forkJoinPool.invoke(myRecursiveAction);
}
}
Check the following excerpt:
System.out.println("Doing work myself1 " + this.workload);
try {
Thread.sleep(1000L);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println("Done it ya " + this.workload);
I added a sleep of 1 second and then printed another statement. However, when I run the code, I don't see that statement getting printed, and I don't understand why. Why does it not get printed? In fact, the result of the execution is:
Splitting workload :: 24
Doing work myself1 12
Doing work myself1 12
I was expecting the following line as well: "Done it ya".
Make workload static and volatile:
private static volatile long workload = 0;
Lose the this.workload for just workload.
Alter the if statement to:
if (workload > 0) {
Then you will get to "Done it ya".
I have found the reason why the last line was not getting printed. It is because fork() works asynchronously, so it is altogether a different thread that sleeps for some time. In asynchronous programming there is no need for the main thread to wait for the response to come back unless we add some construct in the code to make it wait. In this case, by the time the worker thread wakes up after 1 second, the main thread is already over.
To force the main thread to wait for the other threads to finish, we need to use join.
ForkJoinTask.join(): this method blocks until the result of the computation is done.
So if I add the following block
for(RecursiveAction subtask : subtasks) {
subtask.join();
}
the main thread waits and we get all the expected lines printed on the console.
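Put together, here is a minimal self-contained sketch of the corrected task (my rewrite of the code above, with a hypothetical class name; ForkJoinTask.invokeAll(subtasks) would be an equivalent shorthand for the fork/join pair):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
public class MyRecursiveActionJoined extends RecursiveAction {
    private final long workload;
    public MyRecursiveActionJoined(long workload) {
        this.workload = workload;
    }
    @Override
    protected void compute() {
        if (workload > 16) {
            System.out.println("Splitting workload :: " + workload);
            List<MyRecursiveActionJoined> subtasks = new ArrayList<>();
            subtasks.add(new MyRecursiveActionJoined(workload / 2));
            subtasks.add(new MyRecursiveActionJoined(workload / 2));
            for (MyRecursiveActionJoined subtask : subtasks) {
                subtask.fork();   // schedule each subtask asynchronously
            }
            for (MyRecursiveActionJoined subtask : subtasks) {
                subtask.join();   // block until that subtask has completed
            }
        } else {
            System.out.println("Doing work myself1 " + workload);
            try {
                Thread.sleep(1000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("Done it ya " + workload);
        }
    }
    public static void main(String[] args) {
        new ForkJoinPool(4).invoke(new MyRecursiveActionJoined(24));
    }
}
Because every level now joins its children, ForkJoinPool.invoke() does not return until the whole tree is done, and all the "Done it ya" lines appear.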
I am trying to get multithreading untangled in my head. I made these three classes.
A global variable class
public partial class globes
{
public bool[] sets = new bool[] { false, false, false };
public bool boolChanged = false;
public string tmpStr = string.Empty;
public int gcount = 0;
public bool intChanged = false;
public Random r = new Random();
public bool gDone = false;
public bool first = true;
}
Drop in point
class Driver
{
static void Main(string[] args)
{
Console.WriteLine("start");
globes g = new globes();
Thread[] threads = new Thread[6];
ParameterizedThreadStart[] pts = new ParameterizedThreadStart[6];
lockMe _lockme = new lockMe();
for (int b = 0; b < 3; b++)
{
pts[b] = new ParameterizedThreadStart(_lockme.paramThreadStarter);
threads[b] = new Thread(pts[b]);
threads[b].Name = string.Format("{0}", b);
threads[b].Start(b);
}
}
}
And then my threading class
class lockMe
{
#region Fields
private string[] words = new string[] {"string0", "string1", "string2", "string3"};
private globes g = new globes();
private object myKey = new object();
private string[] name = new string[] { String.Empty, String.Empty, String.Empty };
#endregion
#region methods
// first called for all threads
private void setName(Int16 i)
{
Monitor.Enter(myKey);
{
try
{
name[i] = string.Format("{0}:{1}", Thread.CurrentThread.Name, g.r.Next(100, 500).ToString());
}
finally
{
Monitor.PulseAll(myKey);
Monitor.Exit(myKey);
}
}
}
// thread 1
private void changeBool(Int16 a)
{
Monitor.Enter(myKey);
{
try
{
int i = getBools();
//Thread.Sleep(3000);
if (g.gcount > 5) { g.gDone = true; return; }
if (i == 3) resets();
else { for (int x = 0; x <= i; x++) { g.sets[x] = true; } }
Console.WriteLine("Thread {0} ran through changeBool()\n", name[a]);
}
finally
{
Monitor.PulseAll(myKey);
Monitor.Exit(myKey);
}
}
}
// thread 2
private void changeInt(Int16 i)
{
Monitor.Enter(myKey);
{
try
{
g.gcount++;
//Thread.Sleep(g.r.Next(1000, 3000));
Console.WriteLine("Thread {0}: Count is now at {1}\n", name[i], g.gcount);
}
finally
{
Monitor.PulseAll(myKey);
Monitor.Exit(myKey);
}
}
}
// thread 3
private void printString(Int16 i)
{
Monitor.Enter(myKey);
{
try
{
Console.WriteLine("...incoming...");
//Thread.Sleep(g.r.Next(1500, 2500));
Console.WriteLine("Thread {0} printing...{1}\n", name[i], words[g.r.Next(0, 3)]);
}
finally
{
Monitor.PulseAll(myKey);
Monitor.Exit(myKey);
}
}
}
// not locked - called from within a locked piece
private int getBools()
{
if ((g.sets[0] == false) && (g.sets[1] == false) && (g.sets[2] == false)) return 0;
else if ((g.sets[0] == true) && (g.sets[1] == false) && (g.sets[2] == false)) return 1;
else if ((g.sets[0] == true) && (g.sets[1] == true) && (g.sets[2] == false)) return 2;
else if ((g.sets[0] == true) && (g.sets[1] == true) && (g.sets[2] == true)) return 3;
else return 99;
}
// should not need locks- called within locked statement
private void resets()
{
if (g.first) { Console.WriteLine("FIRST!!"); g.first = false; }
else Console.WriteLine("Cycle has reset...");
}
private bool getStatus()
{
bool x = false;
Monitor.Enter(myKey);
{
try
{
x = g.gDone;
}
finally
{
Monitor.PulseAll(myKey);
Monitor.Exit(myKey);
}
}
return x;
}
#endregion
#region Constructors
public void paramThreadStarter(object starter)
{
Int16 i = Convert.ToInt16(starter);
setName(i);
do
{
switch (i)
{
default: throw new Exception();
case 0:
changeBool(i);
break;
case 1:
changeInt(i);
break;
case 2:
printString(i);
break;
}
} while (!getStatus());
Console.WriteLine("fin");
Console.ReadLine();
}
#endregion
}
So I have a few questions. First: is it better to have my global class set up like this, or should I be using a static class with properties and altering them that way? Next: when this runs, at random one of the threads will run, pulse/exit the lock, and then step right back in (sometimes 5-10 times before the next thread picks up the lock). Why does this happen?
Each thread is given a certain amount of CPU time; I doubt that one particular thread is getting more actual CPU time than the others if you are locking all the calls in the same fashion and the thread priorities are the same.
Regarding how you use your global class, it doesn't really matter. The way you are using it wouldn't change it one way or the other. Your use of globals was to test thread safety, so when multiple threads are trying to change shared properties all that matters is that you enforce thread safety.
Pulse might be a better option when you know that only one thread can actually proceed; PulseAll is appropriate when you lock something because you have a task to do and, once that task is complete, you won't lock the very next time. In your scenario you lock every time, so doing a PulseAll just wastes CPU, because you know the lock will be taken again for the next request.
Common example of when to use static classes and why you must make them thread safe:
public static class StoreManager
{
private static Dictionary<string,DataStore> _cache = new Dictionary<string,DataStore>(StringComparer.OrdinalIgnoreCase);
private static object _syncRoot = new object();
public static DataStore Get(string storeName)
{
//this method will look for the cached DataStore, if it doesn't
//find it in cache it will load from DB.
//The thread safety issue scenario to imagine is, what if 2 or more requests for
//the same storeName come in? You must make sure that only 1 thread goes to
//the DB and all the rest wait...
//check to see if a DataStore for storeName is in the dictionary
if ( _cache.ContainsKey( storeName) == false )
{
//only threads requesting unknown DataStores enter here...
//now serialize access so only 1 thread at a time can do this...
lock(_syncRoot)
{
if (_cache.ContainsKey(storeName) == false )
{
//only 1 thread will ever create a DataStore for storeName
DataStore ds = DataStoreManager.Get(storeName); //some code here goes to DB and gets a DataStore
_cache.Add(storeName,ds);
}
}
}
return _cache[storeName];
}
}
What's really important to see is that the Get method only single-threads the call when there is no DataStore for the storeName.
Double-Check-Lock:
You can see the first lock() happens after an if, so imagine 3 threads simultaneously run the if ( _cache.ContainsKey(storeName) ...; now all 3 threads enter the if. Now we lock so that only 1 thread can enter, and then we do the same exact if statement again; only the very first thread that gets here will actually pass this if statement and create the DataStore. Once the first thread .Add's the DataStore and exits the lock, the other 2 threads will fail the second check (the double check).
From that point on any request for that storeName will get the cached instance.
So we single-threaded our application only in the spots that required it.
How can I distribute operations, that is, duplicate the items/actions sent into one pipe across various different pipes which can all access the original pipe?
Say I have a parent thread "Pthread", and I want to link it to 4 or 5 child threads, just like a binary tree. Any operation performed on "Pthread" should be distributed to all the child threads (something similar to what an ESB does in an SOA architecture).
For example, A+B should be sent to all 5 threads/pipes at the same time and processed.
Is there a way to do this?
public class MainThreadEntry {
public void ThreadCreationMethod()
{
List<Future<Object>> listOfResult = null; // listOfResult is a list of Integer objects produced by the different threads
ExecutorService executor = Executors.newFixedThreadPool(5); // no of threads to create from main thread
List<EachThreadComputation> list = new ArrayList<EachThreadComputation>();
for (int i = 0; i < 5; i++) {
EachThreadComputation separateComputationInnerClass = new EachThreadComputation(1, 2); // inner class created for each thread; the 1, 2 parameters can be dynamic
list.add(separateComputationInnerClass);
}
try {
listOfResult = executor.invokeAll(list); // call on different threads with 5 separate executionpath for computation
} catch (InterruptedException e) {
}
}
private class EachThreadComputation implements Callable<Object>{
private int A;
private int B;
EachThreadComputation(int A,int B) {
this.A = A;
this.B = B;
}
@Override
public Object call() throws Exception {
return A + B;
}
}}
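As a small follow-up (not part of the original snippet): invokeAll blocks until all the Callables have finished, so the results could then be collected from the returned futures roughly like this, reusing the names from the snippet above and assuming the usual java.util.concurrent imports (Future, ExecutionException):
listOfResult = executor.invokeAll(list); // blocks until all 5 call() methods have returned
for (Future<Object> result : listOfResult) {
    System.out.println("Result from one worker: " + result.get()); // get() rethrows any exception from call()
}
executor.shutdown(); // let the pool's threads exit once all tasks are done
(Future.get() declares InterruptedException and ExecutionException, so in real code this would sit inside the existing try/catch.)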
I got the CyclicBarrier code from the Oracle documentation page to understand it better. I modified it and now have one doubt.
The code below doesn't terminate, but if I uncomment the Thread.sleep block, it works fine.
import java.util.Arrays;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
class Solver {
final int N;
final float[][] data;
boolean done = false;
final CyclicBarrier barrier;
class Worker implements Runnable {
int myRow;
Worker(int row) {
myRow = row;
}
public void run() {
while (!done) {
processRow(myRow);
try {
barrier.await();
} catch (InterruptedException ex) {
return;
} catch (BrokenBarrierException ex) {
return;
}
}
System.out.println("Run finish for " + Thread.currentThread().getName());
}
private void processRow(int row) {
float[] rowData = data[row];
for (int i = 0; i < rowData.length; i++) {
rowData[i] = 1;
}
/*try {
Thread.sleep(2000);
} catch (InterruptedException e) {
e.printStackTrace();
}*/
done = true;
}
}
public Solver(float[][] matrix) {
data = matrix;
N = matrix.length;
barrier = new CyclicBarrier(N, new Runnable() {
public void run() {
for (int i = 0; i < data.length; i++) {
System.out.println("Data " + Arrays.toString(data[i]));
}
System.out.println("Completed:");
}
});
for (int i = 0; i < N; ++i)
new Thread(new Worker(i), "Thread "+ i).start();
}
}
public class CyclicBarrierTest {
public static void main(String[] args) {
float[][] matrix = new float[5][5];
Solver solver = new Solver(matrix);
}
}
Why is Thread.sleep required in the above code?
I've not run your code, but there may be a race condition; here is a scenario that reveals it:
you start the first thread; it runs for long enough to finish the processRow method call, so it sets done to true and then waits on the barrier,
the other threads start, but they see that all is "done", so they never enter the loop, never wait on the barrier, and end directly,
the barrier will never be tripped, as only one of the N threads has reached it,
deadlock.
Why it works with the sleep:
when one of the threads starts to sleep, it lets the other threads run before marking the work as "done",
the other threads have enough time to work and can themselves reach the barrier,
2 seconds is more than enough for 5 threads to finish a piece of processing that should not last longer than 10 ms.
But note that if your system is overloaded it could still deadlock:
the first thread starts to sleep,
the OS scheduler lets another application run for more than 2 seconds,
the OS scheduler comes back to your application, the thread scheduler picks the first thread again and lets it finish, setting done to true,
and we are back in the first scenario => deadlock again.
And a possible solution (sorry, not tested):
change your while loops into do/while loops:
do
{
processRow(myRow);
...
}
while (!done);
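Put together, the Worker.run() method from the question would look something like this with the do/while change (my sketch, untested just like the suggestion above):
public void run() {
    do {
        processRow(myRow);
        try {
            barrier.await();   // now every worker reaches the barrier at least once before checking done
        } catch (InterruptedException ex) {
            return;
        } catch (BrokenBarrierException ex) {
            return;
        }
    } while (!done);
    System.out.println("Run finish for " + Thread.currentThread().getName());
}
This way a worker only tests done after it has awaited the barrier once, so the scenario where the other threads skip the barrier entirely can no longer happen.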