Really what is event in Threads? [closed] - multithreading

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
A thread is formally a sequence of events.
Some of these events are mentioned below:
Assign to a shared variable
Assign to a local variable
Invoke method
Return from method
So, are instruction execution and events the same thing or not?
If they are different, I need to know the difference between an event and an instruction execution.
Can anyone explain what exactly is called an event?
Threads and these events can be seen as a state diagram, where thread states (program counter, local variables) are the states and events are the transitions.
Whenever an event happens, the thread state may change.
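A minimal sketch of that state-machine view in JavaScript (the event types, the `step` function, and the state shape are all made up here, purely for illustration):

```javascript
// Toy model: a thread state is (program counter, local variables),
// and each event is a transition to a new state.
function step(state, event) {
  switch (event.type) {
    case 'assignLocal': // assign to a local variable, advance pc
      return { pc: state.pc + 1, locals: { ...state.locals, [event.name]: event.value } };
    case 'invoke':      // invoke a method: jump to its entry point
      return { pc: event.entry, locals: state.locals };
    default:            // any other event just advances the pc
      return { pc: state.pc + 1, locals: state.locals };
  }
}

let s = { pc: 0, locals: {} };
s = step(s, { type: 'assignLocal', name: 'x', value: 42 });
console.log(s); // { pc: 1, locals: { x: 42 } }
```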
Thanks in advance

An internal event is an instruction execution. An external event is a means of communication between threads. External events are implemented by special kinds of instructions that can safely be executed by parallel threads (CAS: compare-and-set, also called compare-and-swap). The ultimate goal of an external event is to pass a signal from one thread to another. Usually this is done using a buffer: one thread puts a signal into the buffer, and another thread extracts that signal, waiting if no signals are ready.
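As a sketch of such an external-event mechanism, here is a one-slot signal "buffer" built on JavaScript's `Atomics.compareExchange` over a `SharedArrayBuffer` (in a real program the sender and receiver would run on different worker threads sharing this buffer; here both run on one thread just to show the primitive):

```javascript
// A one-slot CAS-based signal flag. flag[0] === 0 means "no signal".
const buf = new SharedArrayBuffer(4);
const flag = new Int32Array(buf);

// Sender: publish a signal only if none is pending.
// compareExchange(arr, i, expected, replacement) atomically swaps and
// returns the old value, so the sender knows whether it won the race.
function sendSignal() {
  return Atomics.compareExchange(flag, 0, 0, 1) === 0;
}

// Receiver: consume the signal if one is present.
function takeSignal() {
  return Atomics.compareExchange(flag, 0, 1, 0) === 1;
}

console.log(sendSignal()); // true  – signal published
console.log(sendSignal()); // false – a signal is already pending
console.log(takeSignal()); // true  – signal consumed
console.log(takeSignal()); // false – nothing left to take
```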

Related

Understanding node.js [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have started reading node.js. I have a few questions:
Is node better than multi-threading just because it saves us from caring about deadlocks and reduces thread creation overhead, or are there other factors too? Node does use threads internally, so we can't say that it saves thread creation overhead, only that the overhead is managed internally.
Why do we say that node is not good for multi-core processors? It creates threads internally, so it must be getting benefits of multi-core. Why do we say it is not good for CPU intensive applications? We can always fork new processes for CPU intensive tasks.
Are only functions with callbacks dispatched as threads, or are there other cases too?
Non-blocking I/O can be achieved using threads too: a main thread may always be ready to receive new requests. So what is the benefit?
Correct.
Node.js does scale with cores, through child processes and clusters, among other things.
Callbacks are just a common convention developers use to implement asynchronous methods. There is no technical reason why you have to include them. You could, for example, have all your async methods use promises instead.
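As a sketch of that point (the `delayCb`/`delayPromise` names are made up for illustration; nothing forces an async API to take a callback):

```javascript
// The same async operation exposed two ways.
function delayCb(ms, cb) {
  // Node-style (err, value) callback convention
  setTimeout(() => cb(null, ms), ms);
}

function delayPromise(ms) {
  // Same operation, promise-based instead
  return new Promise((resolve) => setTimeout(() => resolve(ms), ms));
}

delayCb(5, (err, v) => console.log('callback got', v));
delayPromise(5).then((v) => console.log('promise got', v));
```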
Everything node does could be accomplished with threads, but there is less code/overhead involved with node.js's asynchronous IO than there is with multi-threaded code. You do not, for example, need to create an instance of thread or runnable every time like you would in Java.

Does one Node.js thread block the other? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
Using Node.js and one single CPU virtual instance, if I put a worker thread and a web thread on the same node, would one block the other? Would I require two CPUs for them to run perfectly in parallel?
Yes, one would block the other if the code is all synchronous.
Since you only have one virtual CPU instance (assuming not hyperthreaded), at the core level, the CPU only takes instructions in a synchronous fashion: 1, then 2, then 3, then 4.
Therefore, in theory, if one worker ran something like this, it would block:
while (true) {
  doSomething(); // never returns control to the event loop
}
Disclaimer: I'm not sure whether the OS kernel would handle anything regarding blocking instructions.
However, Node.js runs all I/O in the event loop, along with tasks that you explicitly state to be run in the event loop (process.nextTick(), setTimeout()...). The way the event loop works is explained well here, so I won't go into much detail - however, the only blocking part about Node.js is synchronously running code, like the example above.
So, long story short: since your web worker uses Node.js, and the http module is an async module, your web worker will not block. Since your worker thread also uses Node.js, assuming it executes code that is launched when an event occurs (a visit to your website, for example), it will not block.
To run synchronous code perfectly in parallel, you would need two CPUs. However, assuming it is asynchronous, it should work just fine on one CPU.
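A small experiment makes the blocking concrete (a sketch; timings are approximate): a 10 ms timer cannot fire while synchronous code holds the single thread.

```javascript
const start = Date.now();

// Timer due in 10 ms – but it can only fire once the loop is free.
setTimeout(() => {
  console.log(`timer fired after ${Date.now() - start} ms`); // ~100, not 10
}, 10);

// Synchronously occupy the one thread for ~100 ms.
while (Date.now() - start < 100) { /* busy wait */ }
console.log('busy loop done');
```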

Is this a sane architecture for a multi-user network server? (How much overhead does pipes-concurrency introduce?) [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I currently have it so that there is one thread handling the accept loop, one main thread for doing all the stateful logic stuff, and then 2 threads per connected client. One client thread is handling the input pipeline and using pipes-concurrency to send messages to the main logic thread. The other client thread is handling the output pipeline, getting messages from the main logic thread and sending them to the client.
My reasoning for doing it this way is that the main logic thread can use worker threads to do pure computations on an immutable state, then do all the state changes at once and loop back around with the new state. That way I can make use of multiple CPUs without having to worry about the problems of concurrent state modifications.
Is the overhead of STM/pipes-concurrency small enough that this is a reasonable approach when I will end up with a couple thousand connected clients sending two or three messages per second each?
Haskell green threads are cheap enough that I would definitely recommend the approach of 2 threads per client. Without seeing details, I can't comment on whether STM will be a bottleneck or not, but that's going to depend on your implementation. STM can definitely handle that level of workload, assuming it's used correctly.

Notify gpio interrupt to user space from a kernel module [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have code that detects a GPIO interrupt in a kernel module. Now, I am looking for a mechanism to notify user space from the kernel module upon detecting the GPIO interrupt. Any example / code snippet, with the advantages/disadvantages of the different options? I would appreciate your response.
Take a look at the GPIO keyboard driver (drivers/input/keyboard/gpio_keys.c). It is a good starting point for your problem.
In the userspace you then listen (some blocking read for example, or just tail to test) to /dev/input/yourevent for events.
You can send a signal to a user-space thread from the kernel API, which lets the user thread run without blocking:
send_sig(int sig, struct task_struct *p, int priv)
But there is a limitation: you need to know the PID of the user thread in the kernel. You can overcome this by having the user process write its PID via /proc and the kernel read it from there. With this arrangement, when there is an interrupt, the kernel can send a signal to the user thread. If your process restarts or gets killed, you will have to update the PID via /proc.
If I were you, I would prefer this approach unless I needed to transfer data from kernel to user space. If data transfer were required, I would use Netlink or some other mechanism.
You can:
(1) Send a signal to the user application, or
(2) implement the file_operations->poll method, using poll_wait and a wait queue to wake the user application when the interrupt occurs.

What is event-driven programming? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
What is event-driven programming, and does event-driven programming have anything to do with threading? I came to this question while reading about servers and how they handle user requests and manage data. If a user sends a request, the server begins to process data and writes the state to a table. Why is that so? Does the server stop processing data for that user and start to process data for another user, or is the processing for every user run in a different thread (a multithreaded server)?
Event driven programming != Threaded programming, but they can (and should) overlap.
Threaded programming is used when multiple actions need to be handled by a system "simultaneously." I use "simultaneously" loosely, as most OSes use a time-sharing model for threaded activity, at least when there are more threads than processors available. Either way, that is not germane to your question.
I would use threaded programming when I need an application to do two or more things - like receiving user input from a keyboard (thread 1) and running calculations based upon the received input (thread 2).
Event driven programming is a little different, but in order for it to scale, it must utilize threaded programming. I could have a single thread that waits for an event / interrupt and then processes things on the event's occurrence. If it were truly single threaded, any additional events coming in would be blocked or lost while the first event was being processed. If I had a multi-threaded event processing model then additional threads would be spun up as events came in. I'm glossing over the producer / worker mechanisms required, but again, not germane to the level of your question.
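To make the single-threaded case concrete, here is a minimal event-loop sketch (a toy model for illustration, not how Node.js or any real server is implemented): events are queued and handlers run one at a time, so a slow handler delays everything behind it.

```javascript
const queue = [];    // pending events
const handlers = {}; // event name -> handler function

function on(event, handler) { handlers[event] = handler; }
function emit(event, payload) { queue.push({ event, payload }); }

// Single-threaded loop: drain the queue, one handler at a time.
function runLoop() {
  while (queue.length > 0) {
    const { event, payload } = queue.shift();
    if (handlers[event]) handlers[event](payload);
  }
}

on('request', (user) => console.log('handling request from', user));
emit('request', 'alice');
emit('request', 'bob');
runLoop(); // processes both events sequentially on one thread
```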
Why does a server start processing / storing state information when an event is received? Well, because it was programmed to. :-) State handling may or may not be related to the event processing. State handling is a separate subject from event processing, just like events are different than threads.
That should answer all of the questions you raised. Jonny's first comment / point is worth heeding - being more specific about what you don't understand will get you better answers.
