Stack with mode in O(1) - statistics

Is there any way that I can keep track of the stack and get its mode in O(1) time?
I have implemented a min-stack and a max-stack before, but this one is new to me. Any thoughts?

The idea is the same as for a min-stack or max-stack, except that now we also have to keep track of how many times each element occurs in the stack so we can decide whether a newly pushed element changes the mode. (You can generalize this to any operation where you can provide a (possibly stateful) function (currentValue, beingPushed) -> nextValue and guarantee that popping returns to the previous value; a sketch of that generalization appears after the code below.)
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class ModeStack<T> {
    // parallel stacks: each entry of modeStack is the mode of the stack as of the corresponding push
    private final Deque<T> stack = new ArrayDeque<>();
    private final Deque<T> modeStack = new ArrayDeque<>();
    private final Map<T, Integer> count = new HashMap<>();

    public ModeStack() {}

    public void push(T t) {
        stack.push(t);
        int tCount = count.getOrDefault(t, 0) + 1;
        count.put(t, tCount);
        if (modeStack.isEmpty())
            modeStack.push(t);
        else
            modeStack.push(tCount > count.get(modeStack.peek()) ? t : modeStack.peek());
    }

    // throws NoSuchElementException if the stack is empty
    public T pop() {
        T t = stack.pop();
        int newCount = count.get(t) - 1;
        // remove unneeded map entries to prevent memory retention
        if (newCount == 0)
            count.remove(t);
        else
            count.put(t, newCount);
        modeStack.pop();
        return t;
    }

    // returns null if the stack is empty; ties broken by keeping the earlier mode
    public T mode() {
        return modeStack.peek();
    }

    public static void main(String[] args) {
        ModeStack<Integer> s = new ModeStack<>();
        s.push(1);
        System.out.println(s.mode()); // 1
        s.push(2);
        s.push(2);
        System.out.println(s.mode()); // 2
        s.pop();
        System.out.println(s.mode()); // 1
    }
}
Maintaining the map does not change the asymptotic space complexity, as in the worst case all keys map to 1 and the map has size n -- but the element stack and mode stack are also size n, so the total space usage is O(n).
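To illustrate the generalization mentioned above, here is a minimal sketch of a stack that keeps a parallel stack of aggregates for an arbitrary (currentValue, beingPushed) -> nextValue function. The class and parameter names (AggregateStack, combine) are made up for this example; the mode case additionally needs the count map because its aggregate is stateful, but for pure functions such as min or max this is all you need.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.BinaryOperator;

public class AggregateStack<T> {
    private final Deque<T> stack = new ArrayDeque<>();
    private final Deque<T> aggStack = new ArrayDeque<>(); // aggregate of each element and everything below it
    private final BinaryOperator<T> combine;              // (currentValue, beingPushed) -> nextValue

    public AggregateStack(BinaryOperator<T> combine) {
        this.combine = combine;
    }

    public void push(T t) {
        aggStack.push(aggStack.isEmpty() ? t : combine.apply(aggStack.peek(), t));
        stack.push(t);
    }

    public T pop() {
        aggStack.pop();          // discarding the top aggregate restores the previous one
        return stack.pop();
    }

    // returns null if the stack is empty
    public T aggregate() {
        return aggStack.peek();
    }
}

For example, new AggregateStack<Integer>(Math::min) behaves like a min-stack and new AggregateStack<Integer>(Math::max) like a max-stack, each with O(1) push, pop, and query.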

Related

Multithreaded merge sort Stack Overflow error

I'm trying to write a multithreaded merge sort, but I've encountered a stack overflow error and I'm not sure what is causing it.
public static void concurrentMergeSort(int[] arr, int threadCount) {
    if (threadCount <= 1) {
        regularMergeSort(arr);
        return;
    }
    int middle = arr.length / 2;
    int[] left = Arrays.copyOfRange(arr, 0, middle); // Says error here
    int[] right = Arrays.copyOfRange(arr, middle, arr.length);
    concurrentMergeSort(left); // Says error here
    concurrentMergeSort(right);
    Thread leftSort = new Thread(new Sorting(left, threadCount));
    Thread rightSort = new Thread(new Sorting(right, threadCount));
    try {
        leftSort.join();
        rightSort.join();
    }
    catch (Exception ex) {
        ex.printStackTrace();
    }
    merge(arr, left, right);
}

public static void regularMergeSort(int[] arr) {
    if (arr.length == 1) {
        return;
    }
    int middle = arr.length / 2;
    int[] left = Arrays.copyOfRange(arr, 0, middle);
    int[] right = Arrays.copyOfRange(arr, middle, arr.length);
    regularMergeSort(left);
    regularMergeSort(right);
    merge(arr, left, right);
}
}
I was thinking that maybe the thread count was never decreasing, but when I modify the thread count I still get the same result. It was also working until I split it into separate regularMergeSort and concurrentMergeSort methods. I only added the regular merge sort because I was barely getting a speed increase from the concurrent version alone, and the whole point of this modification of merge sort is to make the sort faster with multithreading.
Your base case in regularMergeSort is:
if (arr.length == 1)
When middle == 0, you end up creating an empty array; this terminating condition is never hit for it, and the recursion never ends (hence the StackOverflowError). Change the condition to:
if (arr.length <= 1)
And assuming your merge function handles empty arrays, you should be good.
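For reference, a minimal sketch of regularMergeSort with just the base case changed (everything else is assumed to stay as in your code):

public static void regularMergeSort(int[] arr) {
    if (arr.length <= 1) { // also stops on the empty array produced when middle == 0
        return;
    }
    int middle = arr.length / 2;
    int[] left = Arrays.copyOfRange(arr, 0, middle);
    int[] right = Arrays.copyOfRange(arr, middle, arr.length);
    regularMergeSort(left);
    regularMergeSort(right);
    merge(arr, left, right);
}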

System.IndexOutOfRangeException: Array index is out of range

public static void get_sum_while(int[] num, int len)
{
    int sum2, i = 0;
    while (i < len)
    {
        sum2 = sum2 + num[i];
        i++;
    }
    Console.WriteLine("The sum of the series by while loop is {0}", sum2);
}

public static int get_sum_recur(int[] num, int len)
{
    int sum3;
    if (len == 0)
        return sum3 = sum3 + num[0];
    else
    {
        return sum3 = num[len] + get_sum_recur(num, length - 1);
    }
}
}
Hello, this sums the series using three functions. The first two are okay, but the recursive one does not work: it throws an exception. I don't know where I went wrong. Is this the correct way to get the sum by recursion?
The idea of summing a list by recursion is to add one element of the list to the sum of the same list without that element.
sum([a,b,c,d,e]) = a + sum([b,c,d,e])
So the initial value of the result has to be set to 0.
Then choose an element, for example the first one, add it to the current result, and call sum on the rest of the list with the new result.
When the list is empty, end the recursion.
In pseudo-code (because I don't have a C# compiler at hand), this gives:
public static int get_sum_recur(int result, int[] list) {
    if (len(list) == 0) {
        return result; // end recursion
    }
    else {
        // list[1:] stands for "list without its first element"
        return get_sum_recur(result + list[0], list[1:]);
    }
}

public static main {
    print(get_sum_recur(0, [1,2,3,4,5,6]));
}

Lock free list remove operation

I have the following problem definition:
Design a lock-free simple linked list with the following operations:
Add(item): add the node to the beginning (head) of the list
Remove(item): remove the given item from the list
Below is the code implemented so far:
public class List<T>
{
    private readonly T _sentinel;
    private readonly Node<T> _head;

    public List()
    {
        _head = new Node<T>();
        _sentinel = default(T);
    }

    public List(T item)
    {
        _head = new Node<T>(item);
        _sentinel = item;
    }

    public void Add(T item)
    {
        Node<T> node = new Node<T>(item);
        do
        {
            node.Next = _head.Next;
        }
        while (!Atomic.CAS(ref _head.Next, node.Next, node));
    }

    public void Remove(Node<T> item)
    {
        Node<T> next;
        Node<T> oldItem = item;
        if (item.Value.Equals(_sentinel))
            return;
        item.Value = _sentinel;
        do
        {
            next = item.Next;
            if (next == null)
            {
                Atomic.CAS(ref item.Next, null, null);
                return;
            }
        } while (!Atomic.CAS(ref item.Next, next, next.Next));
        item.Value = next.Value;
    }
}
The head is actually a dummy (sentinel) node kept for ease of use; the real head of the list is _head.Next.
The problem is with the Remove operation when trying to remove the last element of the list.
For removal there are two cases:
If the node has a non-null Next pointer, do the CAS operation and steal the value of the next node, effectively removing that next node instead.
The problematic case is when the element to remove is the last one in the list. There I would need to do, atomically: if (item == oldItem and item.Next == null) then item = null, where oldItem is a pointer to the item to remove.
So what I want to do, in the case of removing node C, is:
if (C == old-C-reference and C.Next == null) then C = null => all atomically
The problem is that CAS works on only a single location.
How can I solve this atomically? Or is there a better way of doing this remove operation that I'm missing here?
when removing B we do a trick by copying C's contents to B and removing C: B.Next = C.Next (in the loop) and B.Value = C.Value after the move succeeded
So you need to atomically modify two memory locations, and CAS in .NET does not support that. You can, however, wrap those two values in another object that can be swapped out atomically:

class ValuePlusNext<T> {
    T Value;
    Node<T> Next;
}

class Node<T> {
    ValuePlusNext<T> Value;
}

Now you can write to both values in one atomic operation: CAS(ref Value, new ValuePlusNext<T>(next.Value, next.Value.Next)). Something like that.
It is strange that ValuePlusNext has the same structure as your old Node class had. In a sense you are now managing two physical linked-list nodes for each logical one.
while (true) {
    var old = item.Value;
    var replacement = new ValuePlusNext(...); // 'new' is a C# keyword, so the variable needs another name
    if (CAS(ref item.Value, old, replacement)) break;
}

Stack size difference for Thread and Process

I have recently observed in Java (while implementing a deeply recursive function call) that a thread I start myself gets a deeper call stack than the main program.
By this I mean, for example, that the thread could execute approximately 30,000 recursive calls,
while the program without the thread could only manage about 10,000 recursive calls to the same function.
Can anyone suggest why this is so?
For better understanding and context, please run the Java code below as-is and look at the messages printed to the console.
package com.java.concept;

/**
 * This provides a mechanism to increase the call stack size: by starting the
 * thread in the caller we can increase it. Results were 3 times higher.
 */
public class DeepRecursionCallStack {
    private static int level = 0;

    public static long fact(int n) {
        level++;
        return n < 2 ? n : n * fact(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(null, null, "DeepRecursionCallStack", 1000000) {
            @Override
            public void run() {
                try {
                    level = 0;
                    System.out.println(fact(1 << 15));
                } catch (StackOverflowError e) {
                    System.err.println("New thread : true recursion level was " + level);
                    System.err.println("New thread : reported recursion level was "
                            + e.getStackTrace().length);
                }
            }
        };
        t.start();
        t.join();

        try {
            level = 0;
            System.out.println(fact(1 << 15));
        } catch (StackOverflowError e) {
            System.err.println("Main code : true recursion level was " + level);
            System.err.println("Main code : reported recursion level was "
                    + e.getStackTrace().length);
        }
    }
}

How to make object (a mutable stack) thread-safe?

How can I make this Scala class thread-safe?
class Stack {
  case class Node(value: Int, var next: Node)

  private var head: Node = null
  private var sz = 0

  def push(newValue: Int) {
    head = Node(newValue, head)
    sz += 1
  }

  def pop() = {
    val oldNode = head
    head = oldNode.next
    oldNode.next = null
    sz -= 1
  }

  def size = sz // I am accessing sz from two threads
}
This class is clearly not threadsafe. I want to make it threadsafe.
Thanks in Advance,
HP
Just because it's fun, you can also make this thread-safe by popping head into an AtomicReference and avoiding synchronized altogether. Thusly:
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

final class Stack {
  private val head = new AtomicReference[Node](Nil)

  @tailrec
  def push(newValue: Int) {
    val current = head.get()
    if (!head.compareAndSet(current, Cons(newValue, current))) {
      push(newValue)
    }
  }

  @tailrec
  def pop(): Option[Int] = head.get() match {
    case current @ Cons(v, tail) => {
      if (!head.compareAndSet(current, tail))
        pop()
      else
        Some(v)
    }
    case Nil => None
  }

  def size = {
    def loop(node: Node, size: Int): Int = node match {
      case Cons(_, tail) => loop(tail, size + 1)
      case Nil => size
    }
    loop(head.get(), 0)
  }

  private sealed trait Node
  private case class Cons(head: Int, tail: Node) extends Node
  private case object Nil extends Node
}
This avoids locking entirely and provides substantially better throughput than the synchronized version. It's worth noting, though, that this sort of fake thread-safe data structure is rarely a good idea. Handling synchronization and state management concerns at the level of a data structure is a bit like trying to handle IO exceptions within an XML parser: you're trying to solve the right problem in the wrong place and you don't have the information needed to do it. For example, the stack above is perfectly safe, but it's certainly not consistent across operations (e.g. you could push and then immediately pop and still get None, because another thread popped your element in between).
Your better option is to use an immutable stack (like List) and throw that into an AtomicReference if you need shared mutable state.
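For illustration, a minimal sketch of that last suggestion: a plain immutable List held in an AtomicReference. The class name ListStack and the Option-returning pop are my own choices here, and the same cross-operation caveat as above still applies.

import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

final class ListStack {
  private val items = new AtomicReference[List[Int]](Nil)

  @tailrec
  def push(newValue: Int) {
    val current = items.get()
    if (!items.compareAndSet(current, newValue :: current)) push(newValue)
  }

  @tailrec
  def pop(): Option[Int] = {
    val current = items.get()
    current match {
      case head :: tail =>
        if (items.compareAndSet(current, tail)) Some(head) else pop()
      case Nil => None
    }
  }

  def size: Int = items.get().size // O(n), and only a snapshot
}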
To my mind, the easiest way to make this meaningfully thread-safe would be as follows:
class Stack {
  case class Node(value: Int, var next: Node)

  private var head: Node = null
  private var sz: Int = 0

  def push(newValue: Int) {
    synchronized {
      head = Node(newValue, head)
      sz += 1
    }
  }

  def pop(): Option[Int] = {
    synchronized {
      if (sz >= 1) {
        val ret = Some(head.value)
        val oldNode = head
        head = oldNode.next
        oldNode.next = null
        sz -= 1
        ret
      } else {
        None
      }
    }
  }

  def size = synchronized { sz }
}
This implementation ensures that pushes and pops are atomic, with pop returning a Some wrapping the value it removed from the top of the stack, or None if the stack was already empty.
As a note, access to the size is synchronized, but you have no way of guaranteeing that it is still correct at any point after it is returned, since other threads can access the stack in the meantime and alter its size. If you really do need the size to be exactly accurate, you would have to go about this differently, synchronizing on the whole stack for as long as you use the value.
