Confused about what the parameter int "i" is in this implementation of a priority queue

So I'm trying to understand how to implement a priority queue (the implementation was shown in an image, not reproduced here), but the parameter "int i" is confusing me. Is this the index from where the user wants to start heapifying? For the heapify method of the priority queue, isn't it always supposed to start at the last leaf node? If so, why do we need this variable "int i"?

Related

Can a simple write to one variable modify the state of another?

Taken from Operating Systems - Principles and Practices Vol 3, Chapter 5, Question 4. This isn't homework; I'm just curious.
Suppose that you mistakenly create an automatic (local) variable v in one
thread t1 and pass a pointer to v to another thread t2. Is it possible that a
write by t1 to some variable other than v will change the state of v as
observed by t2? If so, explain how this can happen and give an example. If
not, explain why not.
I don't think this is possible unless t1 declared v within a lower-level scope, which was popped off the stack via modification to the aforementioned unrelated variable, followed by a swift reallocation of that memory to a new variable. But what I've described isn't a simple write, and that new variable isn't v anymore; at least not to t1.
Were this possible, I think it would imply that a simple write to any unrelated variable could somehow change another memory location even in single threaded programs.
So is this possible?

Parameterized FIFO instantiation in Verilog

I want to have a parameterized FIFO instantiation so that I can call a single FIFO instance with a change in depth (a parameter).
e.g. I have written code for a FIFO with the depth as a parameter.
I will know the depth of the FIFO only from the configuration set by the microprocessor. Based on the register configuration, can I call this FIFO with a variable, parameter-like value?
integer depth_param;
if(config_reg[1])
depth_param <= 128;
else
depth_param <= 512;
gen_fifo #(depth_param) u_fifo (.din(din), .wr(wr) ....);
fifo module is:
module gen_fifo #(parameter depth = 128)
( din,wr,rd,clk....);
Can you please suggest a way I can do this?
This is what the LRM says:
Parameters represent constants; hence, it is illegal to modify their
value at run time. However, module parameters can be modified at
compilation time to have values that are different from those
specified in the declaration assignment. This allows customization of
module instances. A parameter can be modified with the defparam
statement or in the module instance statement. Typical uses of
parameters are to specify delays and width of variables.
'run time' means during simulation, after elaboration. A synthesiser doesn't 'run' anything, but what you're doing is effectively "run time", and so is illegal.
This doesn't mean that you can't do it, though. Pass in your FIFO depth as a module port. I'm assuming that you know how to code a FIFO from first principles. If so, you will normally have a constant for the FIFO size; just replace this constant with the value at the port, and find some way to set the memory size. You'll obviously need to be careful when changing the FIFO size - you may need to reset it, for example. If you don't know how to code a FIFO you should ask with an FPGA or an electronics tag.
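A minimal sketch of that suggestion (port and signal names are illustrative, not from the original code): size the memory for the worst-case depth at elaboration time, and bring the active depth in as an ordinary input port that the full/wrap logic compares against.

```verilog
module gen_fifo #(parameter MAX_DEPTH = 512, parameter WIDTH = 8)
  (input  wire             clk, rst, wr, rd,
   input  wire [9:0]       depth,   // runtime depth, must be <= MAX_DEPTH
   input  wire [WIDTH-1:0] din,
   output reg  [WIDTH-1:0] dout,
   output wire             full, empty);

  // Memory is always sized for the maximum: that part must stay constant.
  reg [WIDTH-1:0] mem [0:MAX_DEPTH-1];
  reg [9:0] wptr, rptr, count;

  assign full  = (count == depth);  // compare against the port, not a constant
  assign empty = (count == 0);

  always @(posedge clk) begin
    if (rst) begin
      wptr <= 0; rptr <= 0; count <= 0;
    end else begin
      if (wr && !full) begin
        mem[wptr] <= din;
        wptr <= (wptr == depth - 1) ? 10'd0 : wptr + 1'b1; // wrap at runtime depth
      end
      if (rd && !empty) begin
        dout <= mem[rptr];
        rptr <= (rptr == depth - 1) ? 10'd0 : rptr + 1'b1;
      end
      case ({wr && !full, rd && !empty})
        2'b10:   count <= count + 1'b1;
        2'b01:   count <= count - 1'b1;
        default: ;
      endcase
    end
  end
endmodule
```

As the answer notes, reset the FIFO (or otherwise quiesce it) whenever depth changes, since the pointers may be beyond the new bound.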

Safe global state for signal handling

I am toying around with Rust and various UNIX libraries. A use-case that I have right now is that I want to react to POSIX signals. To keep things reasonable I want to create an abstraction over the signal handling so that the rest of my program doesn't have to worry about them as much.
Let's call the abstraction SignalHandler:
struct SignalHandler {
pub signals: Arc<Vec<libc::c_int>>,
}
I would like this signals vector to be filled with all the signals that are received. My real state is more complicated, but let's use this vector as an example.
I want the API to behave like this:
// ← No signals are being captured
let h = SignalHandler::try_create().unwrap();
// ← Signals are added to h.signals
// Only one signal handler can be active at a time per process
assert!(SignalHandler::try_create().is_none());
// ← Signals are added to h.signals
drop(h);
// ← No signals are being captured
The problem is that registering a signal handler (e.g. using the nix crate) requires a pointer to a C function:
use nix::sys::signal;
let action = signal::SigAction::new(handle_signal, signal::SockFlag::empty(), signal::SigSet::empty());
signal::sigaction(signal::SIGINT, &action);
I can't pass the signals vector to the handle_signal function, since it needs to have the C ABI and thus can't be a closure. I would like to give out a Weak<_> pointer to that function somehow. This probably means using global state.
So the question is: what data structure should I use for global state that can either be "unset" (i.e. no signals vector) or atomically "set" to some mutable state that I initialize in try_create?
For this type of global state, I would recommend the lazy_static crate. Its macro lets you define a lazily-evaluated, mutable global reference. You may be able to get away with a global Option<T> variable that way.
There is one catch with this situation, though: it is hard to do much of anything safely inside a signal handler. Since a signal handler must be re-entrant, any kind of lock is out, as is any memory allocation (unless the memory allocator used is also re-entrant). That means an Arc<Mutex<Vec<T>>> or anything similar will not work. You may already know this and be dealing with it in some way, though.
Depending on your needs, I might point you towards the chan_signal crate, an abstraction over signals that uses a dedicated thread and sigwait to receive them.
Hope that helps. Another interesting resource to look at would be the signalfd function, which creates a file descriptor on which signals are enqueued. The nix crate has a binding for that as well.

Incorrect synchronization in Go

While taking a look at the Go memory model document (link), I found some behavior that surprised me. The document says that with the code below, it can happen that g prints 2 and then 0.
var a, b int
func f() {
a = 1
b = 2
}
func g() {
print(b)
print(a)
}
func main() {
go f()
g()
}
Is this only a goroutine issue? I am curious why the assignment to 'b' can happen before that of 'a'. Even if the assignments to 'a' and 'b' happen in a different thread (not in the main thread), doesn't it have to be ensured that 'a' is assigned before 'b' within that thread (since the assignment to 'a' comes first and that of 'b' comes later)? Can anyone please explain this clearly?
Variables a and b are allocated and initialized with the zero values of their respective type (which is 0 in case of int) before any of the functions start to execute, at this line:
var a, b int
What may change is the order new values are assigned to them in the f() function.
Quoting from that page: Happens Before:
Within a single goroutine, reads and writes must behave as if they executed in the order specified by the program. That is, compilers and processors may reorder the reads and writes executed within a single goroutine only when the reordering does not change the behavior within that goroutine as defined by the language specification. Because of this reordering, the execution order observed by one goroutine may differ from the order perceived by another. For example, if one goroutine executes a = 1; b = 2;, another might observe the updated value of b before the updated value of a.
The assignments to a and b may not happen in the order you wrote them if reordering makes no difference within the same goroutine. The compiler may reorder them, for example, if first changing the value of b is more efficient (e.g. because its address is already loaded in a register). If changing the assignment order would (or might) cause issues in the same goroutine, then obviously the compiler is not allowed to change the order. Since the goroutine running f() does nothing with the variables a and b after the assignments, the compiler is free to carry them out in either order.
Since there is no synchronization between the 2 goroutines in the above example, the compiler makes no effort to check whether reordering would cause any issues in the other goroutine. It doesn't have to.
But if you synchronize your goroutines, the compiler will make sure that at the "synchronization point" there are no inconsistencies: you have a guarantee that at that point both assignments will have "completed"; so if the "synchronization point" is before the print() calls, you will see the newly assigned values printed: 2 and 1.

How does thread context-switching work with global variable?

I have been confused by this question:
I have this C++ function:
void withdraw(int x) {
balance = balance - x;
}
balance is a global integer variable, which equals 100 at the start.
We run the above function on two different threads: thread A and thread B. Thread A runs withdraw(50) and thread B runs withdraw(30).
Assuming we don't protect balance, what is the final result of balance after running those threads in following sequences?
A1->A2->A3->B1->B2->B3
B1->B2->B3->A1->A2->A3
A1->A2->B1->B2->B3->A3
B1->B2->A1->A2->A3->B3
Explanation:
A1 means the OS executes the first line of function withdraw in thread A, A2 means the OS executes the second line of function withdraw in thread A, B3 means the OS executes the third line of function withdraw in thread B, and so on.
The sequences are, presumably, orders in which the OS schedules threads A and B.
My answer is
20
20
50 (Before the context switch, the OS saves balance. After the context switch, the OS restores balance to 50)
70 (Similar to above)
But my friend disagrees; he said that balance is a global variable, so it is not saved on the stack and therefore is not affected by context switching. He claimed that all 4 sequences result in 20.
So who is right? I can't find fault in his logic.
(We assume we have one processor that can only execute one thread at a time)
Consider this line:
balance = balance - x;
Thread A reads balance. It is 100. Now, thread A subtracts 50 and ... oops
Thread B reads balance. It is 100. Now, thread B subtracts 30 and updates the variable, which is now 70.
...thread A continues and updates the variable, which is now 50. You've just lost the work of thread B.
Threads don't execute "lines of code" -- they execute machine instructions. It does not matter if a global variable is affected by context switching. What matters is when the variable is read, and when it is written, by each thread, because the value is "taken off the shelf" and modified, then "put back". Once the first thread has read the global variable and is working with the value "somewhere in space", the second thread must not read the global variable until the first thread has written the updated value.
Unless the threading standard you are using specifies the behavior, there's no way to know. Most typical threading standards don't specify it, so typically there's no way to know.
Your answer sounds like nonsense though. The OS has no idea what balance is nor any way to do anything to it around a context switch. Also, threads can run at the same time without context switches.
Your friend's answer also sounds like nonsense. How does he know that it won't be cached in a register by the compiler and thus some of the modifications will stomp on previous ones?
But the point is, both of you are just guessing about what might happen to happen. If you want to answer this usefully, you have to talk about what is guaranteed to happen.
Clearly homework, but saved by doing actual work before asking.
First, forget about context switching. Context switching is totally irrelevant to the problem. Assume that you have multiple processors, each executing one thread, and each progressing at an unknown speed, stopping and starting at unpredictable times. Even better, assume that this stopping and starting is controlled by an enemy, who will try to break your program.
And because context switching is irrelevant, the OS will not save or restore anything. It won't touch the variable balance. Only your two threads will.
Your friend is absolutely, totally wrong. It's quite the opposite. Since balance is a global variable, both threads can read and write it. But you don't only have the problem that they might read and write it in unknown order, as you examined, it is worse. They could access it at the same time, and if one thread modifies data while another reads it, you have a race condition and anything at all could happen. Not only could you get any result, your program could also crash.
If balance were a local variable stored on the stack, then each thread would have its own copy, and nothing bad would happen.
A simple and short answer for C++: unsynchronized access to a shared variable is undefined behavior, so anything can happen. The value can be, e.g., 100, 70, 50, 20, 42 or -458995. The program could crash or not. And in theory it's even allowed to order pizza.
The actual machine code that is executed is usually far removed from what your program looks like, and in the case of undefined behavior you are no longer guaranteed that the actual behavior has anything to do with the C++ code you have written.
