Sharing a TCP socket/port is easy with the node.js cluster module, but it does not seem possible to do the same with UDP (dgram).
Is there a way to do this, either with cluster, by sharing file descriptors between processes, or by some other means?
I had a lot of problems with this because node.js doesn't really "fork"; it "spawns", or "fork/execs". When I used cluster for a UDP server, only ONE of the child processes would receive packets: the last one to bind. If you simply fork(), the OS distributes the incoming packets round-robin among the children. If you spawn(), you run into inheritance problems with file/socket handles: options have to be set, and the underlying node.js UDP server may not have applied them.
I had to write my own extension that simply calls the underlying OS fork() and makes the server behave like a normal forking network server.
Windows doesn't have fork(), so this approach won't work there, and that is probably why node.js doesn't offer a plain, garden-variety fork(): it would make node non-portable to Windows.
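To see why a plain fork() is enough, here is a minimal stand-alone C sketch (not part of the extension; port 9999 and the worker count are arbitrary): the socket is bound once before forking, every child inherits the descriptor, and the kernel hands each incoming datagram to one of the children blocked in recvfrom().

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);            /* one UDP socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));   /* bind BEFORE forking */

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {                               /* each child inherits fd */
            char buf[1500];
            for (;;) {
                ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
                printf("child %d got %zd bytes\n", (int)getpid(), n);
            }
        }
    }
    for (;;)
        pause();                                         /* parent just waits */
}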
1) Create a directory; I called mine "util".
2) Put the following two files in that directory.
------------------- cut here, name the following "util.cc" -------
#include <v8.h>     // needed for extension infrastructure
#include <node.h>   // needed for extension infrastructure
#include <unistd.h> // fork(), getpid()
#include <iostream> // not part of the extension infrastructure; only used for debugging output while developing

using namespace node;
using namespace v8;

// The following two functions are examples of the minimum required for a node.js extension that does anything.
static Handle<Value> spoon(const Arguments& args)
{
    pid_t rval = fork();
    if (rval < 0)
    {
        return ThrowException(Exception::Error(String::New("Unable to fork daemon, pid < 0.")));
    }
    Handle<Value> n = v8::Number::New(rval);
    return n;
}

static Handle<Value> pid(const Arguments& args)
{
    pid_t rval = getpid();
    Handle<Value> n = v8::Number::New(rval);
    return n;
}

extern "C" void init(Handle<Object> target)
{
    NODE_SET_METHOD(target, "fork", spoon);
    NODE_SET_METHOD(target, "pid", pid);
}
-------- cut here, name the following "wscript" -------
def set_options(opt):
    opt.tool_options("compiler_cxx")

def configure(conf):
    conf.check_tool("compiler_cxx")
    conf.check_tool("node_addon")

def build(bld):
    obj = bld.new_task_gen("cxx", "shlib", "node_addon")
    obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall"]
    obj.target = "util"
    obj.source = "util.cc"
--------- end of cutting, spare us the cutter ------
3) Run "node-waf configure".
If that goes well,
4) Run "node-waf".
5) A new directory called "build" will be created, and your extension will be at "build/default/util.node". Copy it wherever you like and use it from within your node program like:
var util = require("util.node");
var pid = util.fork();
Also included is a util.pid() function, because process.pid doesn't work right after forking; it still reports the pid of the parent process.
I'm a beginner node extension writer, so if this is a naive approach, oh well, but it has served me well so far. Any improvements, as in simplifications, would be greatly appreciated.
This issue was resolved in node.js v0.10.
I have a question about timeout callbacks and GMainContext. It is really confusing to me.
Suppose I have the code below (a bit incomplete, just for demonstration). I use plain pthreads to create a thread. Within that thread I use GLib and create a GMainContext (stored in l_app.context).
I then create a source to run the function check_cmd iteratively at roughly 1-second intervals. This callback checks for commands from other threads (the pthreads that update the command status are not shown). From here onwards, there are two specific commands:
One to start a looping function
The other to stop the looping function
I have thought of two ways to create that looping function and have it run iteratively:
Plan A: create another timeout with g_timeout_add_seconds()
Plan B: use the same method I used to create check_cmd (a GSource attached to my context)
Both look essentially the same to me, but when I tried them, Plan A did not work while Plan B at least ran once. So I would like to know how to fix them...
Or maybe I should use g_source_add_child_source() instead?
In summary, my questions are:
when you create a new context and push it to become the thread-default context, do all subsequent functions that require a main context refer to this context?
in a nutshell, how do you add new sources when a loop is already running, as in my case?
lastly, is it okay to quit the main loop from within a callback you have created?
Here is my pseudocode:
#include <glib.h>
#include <dirent.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define PLAN_A 0

typedef struct
{
    GMainContext *context;
    GMainLoop *loop;
} _App;

static _App l_app;

guint gID;

gboolean
time_cycle(gpointer udata)
{
    g_print("I AM THREADING");
    return TRUE;
}

gboolean
check_cmd_session(gpointer udata)
{
    while (alive) /// alive is a boolean value that is shared with other threads (not shown)
    {
        if (start)
        {
#if PLAN_A
            /// PLAN A
            //// which context does this add to ??
            gID = g_timeout_add_seconds(10, (GSourceFunc) time_cycle, NULL);
#else
            /// or should I use PLAN B
            GSource *source = g_timeout_source_new(1000);
            g_source_set_callback(source,
                                  (GSourceFunc) time_cycle,
                                  NULL,
                                  NULL);
            gID = g_source_attach(source, l_app.context);
#endif
        }
        else
        {
#if PLAN_A
            g_source_remove(gID);
#else
            /// (Plan B removal not worked out yet)
#endif
        }
    }
    g_main_loop_quit(l_app.loop);
    return FALSE;
}

void *
liveService(Info *info)
{
    l_app.context = g_main_context_new();
    g_main_context_push_thread_default(l_app.context);

    GSource *source = g_timeout_source_new(1000);
    g_source_set_callback(source,
                          (GSourceFunc) check_cmd_session,
                          NULL,
                          NULL);
    /// make it run
    g_source_attach(source, l_app.context);

    g_main_loop_run(l_app.loop);
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid[2];
    int thread_counter = 0;
    int err;

    err = pthread_create(&(tid[thread_counter]), NULL, (void *(*)(void *)) liveService, &info);
    if (err != 0)
    {
        printf("\n can't create live thread :[%s]", strerror(err));
    }
    else
    {
        printf("--> Thread for Live created successfully\n");
        thread_counter++;
    }

    /**** other threads are built, not shown here */

    for (int i = 0; i < 2; i++)
    {
        printf("Joining the %d threads \n", i);
        pthread_join(tid[i], NULL);
    }
    return 0;
}
In summary, my questions are:
when you create a new context and push it to become the thread-default context, do all subsequent functions that require a main context refer to this context?
Functions that are documented as using the thread-default main context will use the GMainContext which has been most recently pushed with g_main_context_push_thread_default().
Functions that are documented as using the global default main context will not. They will use the GMainContext which is created at init time and which is associated with the main thread.
g_timeout_add_seconds() is documented as using the global default main context. So you need to go with plan B if you want the timeout source to be attached to a specific GMainContext.
in a nutshell, how do you add new sources when a loop is already running, as in my case?
g_source_attach() works even while a main context is being iterated, so you can attach new sources to a running loop.
lastly, is it okay to quit the main loop from within a callback you have created?
Yes, g_main_loop_quit() can be called at any point.
From your code, it looks like you’re not creating a new GMainLoop for each GMainContext and are instead assuming that one GMainLoop will somehow work with all GMainContexts in the process. That’s not correct. If you’re going to use GMainLoop, you need to create a new one for each GMainContext you create.
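To make that concrete, here is a minimal sketch of the loop-per-context pattern, reusing check_cmd_session from the question (this is just the shape of it, not a drop-in fix):

static gpointer worker(gpointer data)
{
    GMainContext *ctx = g_main_context_new();
    g_main_context_push_thread_default(ctx);

    /* The loop must be created for this specific context. */
    GMainLoop *loop = g_main_loop_new(ctx, FALSE);

    GSource *src = g_timeout_source_new(1000);
    g_source_set_callback(src, (GSourceFunc) check_cmd_session, NULL, NULL);
    g_source_attach(src, ctx);   /* attach to *this* context, not the global default */
    g_source_unref(src);

    g_main_loop_run(loop);       /* iterates ctx until something calls g_main_loop_quit(loop) */

    g_main_loop_unref(loop);
    g_main_context_pop_thread_default(ctx);
    g_main_context_unref(ctx);
    return NULL;
}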
All other things aside, you might find it easier to use GLib’s threading functions rather than using pthread directly. GLib’s threading functions are portable to other platforms and a little bit easier to use. Given that you’re already linking to libglib, using them would cost nothing extra.
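For example (a sketch only; worker is the thread function from the snippet above):

GThread *live = g_thread_new("live-service", worker, NULL);   /* replaces pthread_create() */
/* ... create the other threads the same way ... */
g_thread_join(live);                                           /* replaces pthread_join() */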
In code like this in a daemon:
// run as root, after initgroups(...), setgid(...)
setuid(user);
char *const args[] = { "./userbinary", NULL };
execv("./userbinary", args);
_exit(1);
there's an obvious problem where the user can attach to the process between the calls to setuid and exec[lvpe], and read out all the process's memory, including sensitive variables and state.
A workaround I use looks like this (obviously, all error handling omitted):
// run as root in the daemon
char *const args[] = { "/usr/bin/mysetuid", uidStr, "./userbinary", NULL };
execv("/usr/bin/mysetuid", args);
_exit(1);

// mysetuid.c:
int main(int argc, char *argv[]) {
    setuid(atoi(argv[1]));
    execv(argv[2], argv + 2);
    exit(1);
}
What is the "standard" way of doing this operation? Using a helper binary seems safest, but I can't find other applications that do this. For example, OpenSSH just relies on the fact that each user's connection gets its own process, so the setuid is always done from a process that has pretty much a blank slate.
I'm writing a client-server (TCP) program in C on a Unix system. The client sends some information and the server answers. There is only one connection per child process. New connections use pre-spawned processes from a pool, and the pool size is dynamic: if the number of free processes (processes not servicing a client) drops too low, new processes should be created, and likewise if it gets too high, extra processes should be terminated.
This is my server code. Every connection makes a new child process with fork(), so each connection runs in its own process. How can I build a dynamic pool like the one described above?
int main(int argc, char *argv[])
{
    int cfd;
    int listener = socket(AF_INET, SOCK_STREAM, 0); // create listener socket
    if (listener < 0) {
        perror("socket error");
        return 1;
    }

    struct sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    int binding = bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    if (binding < 0) {
        perror("binding error");
        return 1;
    }

    listen(listener, 1); // listen for new clients
    signal(SIGCHLD, handler);

    int pid;
    for (;;) // infinite loop on server
    {
        cfd = accept(listener, NULL, NULL); // client socket descriptor
        pid = fork();                       // make child proc
        if (pid == 0)                       // in child proc...
        {
            close(listener); // close listener socket descriptor
            ...              // some server actions that I do (receive or send)
            close(cfd);      // close client fd
            return 0;
        }
        close(cfd);
    }
}
If you have several processes blocked in accept() on the same listening socket, a new incoming connection will be delivered to one of them. (Depending on the OS, several may wake up, but only one will actually get the connection.) So you need to fork several children after listen() but before accept(). After handling a request, a child goes back to accept() instead of exiting. That handles (1) and (2).
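A sketch of what the pre-fork part can look like (spawn_workers and NUM_WORKERS are illustrative names, not a complete solution; error handling omitted):

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

#define NUM_WORKERS 4   /* illustrative; the parent would grow/shrink this dynamically */

/* Fork the workers after listen() and before accept(); each worker loops on accept(). */
static void spawn_workers(int listener)
{
    for (int i = 0; i < NUM_WORKERS; i++) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: becomes a worker */
            for (;;) {
                int cfd = accept(listener, NULL, NULL);
                if (cfd < 0)
                    continue;
                /* ... serve this client (recv/send) ... */
                close(cfd);                 /* then go straight back to accept() */
            }
        }
        /* parent: remember pid, keep the listener open, continue the loop */
    }
}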
(3) is harder. You need some form of IPC. Typically, you'd have a parent process that just manages having the right number of children. Your child processes need to use IPC to tell the parent how busy they are. The parent can then either fork more children (which go into the accept loop above) or send signals to children to tell them to finish up and exit. It should also handle waiting on children, handle unexpected deaths, etc.
The IPC you want to use is probably shared memory. Your two options are SysV (shmget) and POSIX (shm_open) shared memory; you probably want the latter if it's available. You'll have to deal with synchronizing access (both POSIX and SysV provide semaphores to help with this; again, prefer POSIX) or use only atomic accesses.
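For example, a rough sketch of a shared "busy workers" counter in POSIX shared memory, protected by a process-shared semaphore (the name "/mypool" and the pool_state layout are made up for illustration):

#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>

struct pool_state {
    sem_t lock;        /* process-shared semaphore used as a mutex */
    int   busy;        /* number of children currently serving a client */
    int   total;       /* number of children alive */
};

static struct pool_state *pool_state_create(void)
{
    int fd = shm_open("/mypool", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, sizeof(struct pool_state)) < 0) return NULL;

    struct pool_state *st = mmap(NULL, sizeof(*st),
                                 PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (st == MAP_FAILED) return NULL;

    sem_init(&st->lock, 1 /* shared between processes */, 1);
    st->busy = 0;
    st->total = 0;
    return st;
}

/* A child marks itself busy/idle around each request. */
static void mark_busy(struct pool_state *st, int delta)
{
    sem_wait(&st->lock);
    st->busy += delta;
    sem_post(&st->lock);
}

The parent can then read busy/total periodically and decide whether to fork more workers or retire some.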
(You probably don't actually want a process to exit the instant there are more than X free children, that'll lead to repeatedly reaping and spawning them, which is expensive. Instead you probably want some measure of how utilized they were over the last second... So your data is more complicated than a bitmap of in use/free.)
There are a lot of daemons that work like this, so you can fairly easily find code examples. Of course, if you go look at Apache, you'll probably find it more complicated, to get good performance and be portable everywhere.
I have a couple of tasks to do with an Arduino, but one of them takes a very long time, so I was thinking of using threads to run them simultaneously.
I have an Arduino Mega.
[Update]
Finally, after four years, I can install FreeRTOS on my Arduino Mega. Here is a link.
In short: NO.
But you may give it a shot at:
http://www.kwartzlab.ca/2010/09/arduino-multi-threading-librar/
(Archived version: https://web.archive.org/web/20160505034337/http://www.kwartzlab.ca/2010/09/arduino-multi-threading-librar )
Github: https://github.com/jlamothe/mthread
Not yet, but I always use this library with big projects:
https://github.com/ivanseidel/ArduinoThread
I place the callback within a timer interrupt, and voilà! You have pseudo-threads running on the Arduino...
Just to make this thread more complete: there are also protothreads, which have a very small memory footprint (a couple of bytes, if I remember right) and preserve variables local to the thread; very handy and time saving (far fewer finite state machines -> more readable code).
Examples and code:
arduino-class / ProtoThreads wiki
Just to let you know what results you may expect: serial communication at 153.6 kbaud, with threads for status LED blinking, timekeeping, requested function evaluation, IO handling and logic, all on an ATmega328.
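For anyone who hasn't used them, a protothread looks roughly like this (a sketch against Adam Dunkels' pt.h; the LED helper and the millisecond clock passed in are assumptions, not part of the library):

#include "pt.h"                      /* protothreads are a pure C header library */

static struct pt blink_pt;           /* per-protothread state (a couple of bytes) */

/* Runs "in parallel" with other protothreads, but is really a resumable function. */
static PT_THREAD(blink_thread(struct pt *pt, unsigned long now_ms))
{
    static unsigned long last_ms;    /* static: ordinary locals don't survive a wait */
    PT_BEGIN(pt);
    for (;;) {
        PT_WAIT_UNTIL(pt, now_ms - last_ms >= 500);  /* yields until the condition holds */
        last_ms = now_ms;
        /* toggle_led();  -- hypothetical helper */
    }
    PT_END(pt);
}

/* The main loop just keeps calling every protothread; none of them may block: */
/*   PT_INIT(&blink_pt);  ...  blink_thread(&blink_pt, millis()); */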
Not real threading, but TimedActions are a good alternative for many uses:
http://playground.arduino.cc/Code/TimedAction#Example
Of course, if one task blocks, the others will too, whereas with real threading one task can freeze while the others continue...
No, you can't, but you can use timer interrupts.
Ref : https://www.teachmemicro.com/arduino-timer-interrupt-tutorial/
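For example, on an ATmega328-class board a 1-second timer interrupt can be set up roughly like this (a sketch assuming a 16 MHz clock; check the register names against your part's datasheet):

#include <avr/interrupt.h>
#include <avr/io.h>
#include <stdint.h>

volatile uint8_t tick;               /* set by the ISR, consumed by the main loop */

void timer1_init(void)
{
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS12) | (1 << CS10);  /* CTC mode, /1024 prescaler */
    OCR1A  = 15624;                  /* 16 MHz / 1024 / 15625 = 1 interrupt per second */
    TIMSK1 = (1 << OCIE1A);          /* enable compare-match A interrupt */
    sei();                           /* global interrupt enable */
}

ISR(TIMER1_COMPA_vect)
{
    tick = 1;                        /* keep ISRs short; do the real work in the loop */
}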
The previous answer is correct; however, the Arduino generally runs pretty quickly, so if you time your code properly it can accomplish tasks more or less simultaneously.
The best practice is to write your own functions and avoid putting too much real code directly in the default void loop().
You can use arduinos. It is designed for the Arduino environment. Features:
Only static allocation (no malloc/new)
Support context switching when delaying execution
Implements semaphores
Lightweight, both cpu and memory
I use it when I need to receive new commands from bluetooth/network/serial while still executing the old ones, and the old ones have delays in them.
One thread is the server thread, which runs the following loop:
while (1) {
    while ((n = Serial.read()) != -1) {
        // do something with n, like filling a buffer
        if (command_was_received) {
            arduinos_create(command_func, arg);
        }
    }
    arduinos_yield(); // context switch to other threads
}
The other is the command thread that executes the command:
int command_func(void* arg) {
    // move some servos
    arduinos_delay(1000); // wait for them to move
    // move some more servos
    return 0;
}
Arduino does not support multithreaded programming.
However, there are some workarounds, for example the one in this project (you can also install it from the Arduino IDE).
It seems you have to define the scheduling times yourself, whereas in a real multithreaded environment the OS decides when to execute tasks.
Alternatively, you can use protothreads.
The straight answer is: no, no, no! There are some alternatives, but you can't expect perfect multithreading functionality from an Arduino Mega. You can use an Arduino Due or Leonardo for multithreading-like behaviour, as below:
void loop1(){
}
void loop2(){
}
void loop3(){
}
Normally, I handle these types of cases on the backend. You can run the main code on a server while using the Arduino just to collect inputs and show outputs. In such cases I would prefer a NodeMCU, which has built-in WiFi.
Thread NO!
Concurrent YES!
You can run different tasks concurrently with the FreeRTOS library.
https://www.arduino.cc/reference/en/libraries/freertos/
#include <Arduino_FreeRTOS.h> // header of the FreeRTOS library linked above

void TaskBlink( void *pvParameters );
void TaskAnalogRead( void *pvParameters );

void setup()
{
    // Now set up two tasks to run independently.
    xTaskCreate(
        TaskBlink
        , (const portCHAR *)"Blink" // A name just for humans
        , 128 // Stack size
        , NULL
        , 2 // priority
        , NULL );

    xTaskCreate(
        TaskAnalogRead
        , (const portCHAR *) "AnalogRead"
        , 128 // This stack size can be checked & adjusted by reading the Highwater mark
        , NULL
        , 1 // priority
        , NULL );

    // The task scheduler is started automatically after setup().
}

void loop()
{
    // Empty. Everything is done in the tasks.
}
void TaskBlink(void *pvParameters) // This is a task.
{
    (void) pvParameters;

    // initialize digital pin 13 as an output.
    pinMode(13, OUTPUT);

    for (;;) // A Task shall never return or exit.
    {
        digitalWrite(13, HIGH); // turn the LED on (HIGH is the voltage level)
        vTaskDelay( 1000 / portTICK_PERIOD_MS ); // wait for one second
        digitalWrite(13, LOW); // turn the LED off by making the voltage LOW
        vTaskDelay( 1000 / portTICK_PERIOD_MS ); // wait for one second
    }
}
void TaskAnalogRead(void *pvParameters) // This is a task.
{
    (void) pvParameters;

    // initialize serial communication at 9600 bits per second:
    Serial.begin(9600);

    for (;;)
    {
        // read the input on analog pin 0:
        int sensorValue = analogRead(A0);
        // print out the value you read:
        Serial.println(sensorValue);
        vTaskDelay(1); // one tick delay (15ms) in between reads for stability
    }
}
Just take care: when different tasks try to reach shared variables or resources at the same time, such as the I2C bus or an SD card module, use semaphores and mutexes:
https://www.geeksforgeeks.org/mutex-vs-semaphore/
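A rough sketch of the mutex pattern with this FreeRTOS port (treat the header and function names as the usual FreeRTOS API and adapt to your setup):

#include <Arduino_FreeRTOS.h>   /* assumed library header, as in the example above */
#include <semphr.h>             /* FreeRTOS semaphores and mutexes */

SemaphoreHandle_t serialMutex;

void setupSharedResources(void)
{
    serialMutex = xSemaphoreCreateMutex();
}

/* Inside any task that touches the shared resource: */
void useSharedResource(void)
{
    if (xSemaphoreTake(serialMutex, portMAX_DELAY) == pdTRUE) {
        /* ... exclusive access to the serial port / I2C bus / SD card here ... */
        xSemaphoreGive(serialMutex);
    }
}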
Arduino does not support threading. However, you can do the next best thing and structure your code around state machines that run interleaved.
While there are lots of ways to implement your tasks as state machines, I recommend this library (https://github.com/Elidio/StateMachine), which abstracts most of the process.
You can create a state machine as a class like this:
#include "StateMachine.h"
class STATEMACHINE(Blink) {
private:
int port;
int waitTime;
CREATE_STATE(low);
CREATE_STATE(high);
void low() {
digitalWrite(port, LOW);
*this << &STATE(high)<< waitTime;
}
void high() {
digitalWrite(port, HIGH);
*this << &STATE(low)<< waitTime;
}
public:
Blink(int port = 0, int waitTime = 0) :
port(port),
waitTime(waitTime),
INIT_STATE(low),
INIT_STATE(high)
{
pinMode(port, OUTPUT);
*this << &STATE(low);
}
};
The STATEMACHINE() macro abstracts the class inheritance, the CREATE_STATE() macro abstracts the state wrapper creation, the INIT_STATE() macro abstracts the method wrapping, and the STATE() macro abstracts the state wrapper reference within the state machine class.
State transitions are abstracted by the << operator between the state machine class and the state; if you want a delayed state transition, all you have to do is use that operator with an integer, where the integer is the delay in milliseconds.
To use the state machine, first you have to instantiate it. Declaring a pointer to the class at global scope and instantiating it with new in the setup() function does the trick:
Blink *led1, *led2, *led3;

void setup() {
    led1 = new Blink(12, 300);
    led2 = new Blink(11, 500);
    led3 = new Blink(10, 700);
}
Then you run the states in loop():
void loop() {
    (*led2)();
    (*led1)();
    (*led3)();
}
I am trying to automate handler equipment (a robot picks up a chip and puts it onto a hardware platform) with the following requirements:
1. There are 6 sites for the handler. Once the handler puts a device onto a site, it returns a code: code1 for ready to test, code2 for error; while the site is still in process, no code is returned.
2. There is a master PC that controls the handler operation, and the communication between the master and the site PCs uses STAF.
3. I need to use that code to run some tests (which are already implemented and working properly). The handler places devices in FIFO order: the first site returns its code first, and the last site returns its code last.
4. The site PC acts passively: the master PC determines when and how to run the tests. The site PC only knows that when the handler is ready, it executes the tests.
So my question would be: in this case, for the site PCs (Windows based, with Perl and .NET enabled), is the busy-waiting method better, or does a wait-condition mechanism suit better?
For example, the sample code would be:
void runTestonSite()
{
    for (;;)
    {
        if (returnCode == code1)
        {
            testStart(arg1, arg2, arg3);
        }
    }
}
Or is there any better way to do this kind of task?
#include <boost/thread.hpp>

void getReturnCode() {
    // do stuff
}

void RunTestOnSite() {
    // do stuff
}

int main(int argc, char **argv) {
    using namespace boost;

    thread thread_1 = thread(getReturnCode);
    thread thread_2 = thread(RunTestOnSite);

    // do other stuff
    thread_2.join();
    thread_1.join();
    return 0;
}
Please advise,
thanks
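For comparison, the "wait condition" approach mentioned above usually has this shape, so the test thread sleeps instead of spinning (shown with POSIX condition variables purely as a sketch; boost::condition_variable and .NET's Monitor follow the same pattern, and the function names here are illustrative):

#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int returnCode = 0;           /* set by the thread that polls the handler */

/* Called by the polling/communication thread when a new code arrives. */
void set_return_code(int code)
{
    pthread_mutex_lock(&lock);
    returnCode = code;
    pthread_cond_signal(&ready);     /* wake the test thread */
    pthread_mutex_unlock(&lock);
}

/* The test thread blocks here instead of spinning in a for(;;) loop. */
void wait_and_run_tests(int code1)
{
    pthread_mutex_lock(&lock);
    while (returnCode != code1)      /* re-check: guards against spurious wakeups */
        pthread_cond_wait(&ready, &lock);
    pthread_mutex_unlock(&lock);
    /* testStart(arg1, arg2, arg3); -- the existing test entry point from the question */
}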