Kernel as client, User application as server using netlink - linux

I want to establish a connection between a kernel module and a user application, with the kernel acting as the client. In other words, the kernel will send a message to the user app, wait for the reply, receive the reply, and then continue execution.
For example, inside the kernel I will send a message and then wait for the reply:
// inside kernel
nlmsg_unicast();
wait_until_user_reply();
/* process reply */
/* continue execution.. */
while inside the user application:
while (1) {
    // inside user application
    recvmsg();
    /* assemble reply.. */
    sendmsg();
}
However, what netlink does is invoke a callback function every time the user sends a message. What I want is to make the kernel wait for a reply from the user and then continue execution. Is busy-waiting on a global variable that is updated inside the callback function feasible? I tried it, but I don't think that's a very good solution.
Can I do something like "sleep until a reply comes"? Can I put the kernel to sleep?

I have resolved this problem using wait_for_completion(). It turns out that it wasn't that hard: declare a struct completion, call wait_for_completion() after nlmsg_unicast(), and have the netlink receive callback call complete() once it has the user-space reply.

Related

Api route is blocked by huge process in event loop

I have a restify API similar to this (I'll write it in pseudocode):
server.post('api/import', function (req, res) {
    database.write(status of file.id is pending)
    fileModification(req.file)    // here I do some file modifications and then import it into the database
    res.status(200)
    res.send(import has started)
})

server.get('api/import_info', function (req, res) {
    database.select(file status)  // here I want to see the status: imported, or pending (process not finished yet)
})

// In another module, after the import is finished, I update the database:
database.write(file.id import status is completed)
Importing a file is a process that takes about 2 minutes, but even though I don't wait for it to finish in api/import, when I try to hit the 'info' route my API is blocked.
Is it possible that the event loop is blocked, or maybe that the connection is not properly closed?
Thanks in advance
I have some ideas about your question.
You can use the cluster module. Cluster can create one process per CPU core; when one process is blocked, the other processes can still work (a sketch follows below).
You can also fork a new process in your API route and let that new process handle the long-running task.
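Here is a minimal sketch of the cluster approach, assuming a reasonably recent Node version (isPrimary is called isMaster on older releases); the worker's HTTP handler is just a stand-in for your real restify app with the api/import and api/import_info routes:

import cluster from "node:cluster";
import { createServer } from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
    // Fork one worker per CPU core; each worker has its own event loop,
    // so a request that blocks one worker does not stall the others.
    for (let i = 0; i < cpus().length; i++) {
        cluster.fork();
    }
    // Replace any worker that dies (e.g. crashes during an import).
    cluster.on("exit", (worker) => {
        console.log(`worker ${worker.process.pid} exited, forking a new one`);
        cluster.fork();
    });
} else {
    // Stand-in for the real restify server; all workers share port 8080.
    createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
    }).listen(8080);
}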

Qt deleteLater causes crash when time is changed

We created a Qt HTTP server derived from QTcpServer.
Each incoming client connection is handled in a new thread like this:
void WebClientThread::run()
{
// Configure the web client socket
m_socket = new QTcpSocket();
connect(m_socket, SIGNAL(disconnected()), this, SLOT(disconnected()));
connect(m_socket, SIGNAL (error(QAbstractSocket::SocketError)), this, SLOT(socketError(QAbstractSocket::SocketError)));
// Create the actual web client = worker
WebClient client(m_socket, m_configuration, m_pEventConnection, m_pThumbnailStreams, m_server, m_macAddress, 0 );
// Thread event loop
exec();
m_pLog->LOG(L_INFO, "Webclient thread finished");
}
//
// Client disconnect
//
void WebClientThread::disconnected()
{
m_socket->deleteLater();
exit(0);
}
This code works, but we experienced application crashes when it was executed while the NTP connection of our device kicked in and the system time was corrected from the epoch 01/01/1970 to the current time.
The crash could also be reproduced when running the application and meanwhile changing the system time from a script.
The application runs fine, even when the system time changes on the fly, with the following variant:
void WebClientThread::run()
{
// Configure the web client socket
m_socket = new QTcpSocket();
connect(m_socket, SIGNAL(disconnected()), this, SLOT(disconnected()));
connect(m_socket, SIGNAL (error(QAbstractSocket::SocketError)), this, SLOT(socketError(QAbstractSocket::SocketError)));
// Create the actual web client = worker
WebClient client(m_socket, m_configuration, m_pEventConnection, m_pThumbnailStreams, m_server, m_macAddress, 0 );
// Thread event loop
exec();
delete m_socket;
m_pLog->LOG(L_INFO, "Webclient thread finished");
}
//=======================================================================
//
// Client disconnect
//
void WebClientThread::disconnected()
{
exit(0);
}
Why would deleteLater() crash the application when the system time is shifted?
Additional information:
OS: embedded Linux 3.0.0, Qt 4.8.
The socket is a connection between our Qt web server application and the front-end server (lighttpd). Could it be that lighttpd closes the socket when the system time shifts 47 years while the request is still being processed by our web server?
I could reproduce it by sending requests to the server while in parallel running a script that sets the date to 1980, 1990 and 2000, changing it once a second.
This smells of incorrect use of Qt threads. I suggest you do not subclass QThread just to call exec() from its run() method, because it's too easy to do things the wrong way like that.
See for example https://wiki.qt.io/QThreads_general_usage for how to set up a worker QObject for a QThread. The gist of it is: create a subclass of QObject and put your code there, then move an instance of it to a QThread with moveToThread(), and connect signals and slots to make things happen.
Another thing: you normally shouldn't use threads with Qt networking classes like QTcpSocket at all. Qt is event-based and asynchronous, and as long as you only use signals and slots and never block in your slot methods, there is no need for threads; they only complicate things for no benefit. Only if you have time-consuming calculations, or if your program truly needs multiple CPU cores to achieve good enough performance, should you look into multithreading.

NodeJS. Child_process.spawn. Handle process' input prompt

I'm currently working on a web interface for git, accessing git itself via child_process.spawn. Everything is fine while there is a simple "command -> response" exchange, but I cannot understand what I should do about command prompts (git fetch asking for a password, for example). Hypothetically some event is fired, but I don't know what to listen for. All I see is "git_user#myserver's password: _" in the command line where the node.js process itself is running.
It would be great to redirect this request into my web application, but is it even possible?
I've tried listening for message, data, pipe, end, close and readable on all the streams (stdout, stdin, stderr), but none of them fires on the password prompt.
Here is my working solution (without the experiments mentioned above):
var spawn = require("child_process").spawn;

var out = "";
var err = "";
var proc = spawn(exe, cmd);

proc.on("exit", function (exitCode) {
});
proc.stdout.on("data", function (data) {
    out += data;
});
proc.stderr.on("data", function (data) {
    err += data;
});
proc.on("close", function (code) {
    if (!code) func(out);        // success: hand collected stdout to the callback
    else return errHandler(err); // failure: hand collected stderr to the error handler
});
Can you please help me with my investigations?
UPDATE
Current situation: my git web interface has a button "FETCH" (as an example, for a simple "git fetch"). When I press it, an HTTP request is generated and sent to the node.js server created by http.createServer(callback).listen(8080). The callback function receives my request and calls child_process.spawn('git', ['-C', 'path/to/local/repo', 'fetch']). All this time I see only a loading screen in my web interface, but if I switch to the command line window where the node script is running, I will see a password prompt. Now let's pretend that I can't switch to the console window, because I work remotely.
I want to see the password prompt on my web interface. That would be very easy to achieve if, for instance, child_process emitted some event on child.stdin (or somewhere else) when the child prompts for user input. In that case I would send the string "Come on, dude, git wants to know your password! Enter it here: _______" back to the web client (via response.end(str)), keep waiting for the next HTTP request containing the desired password, and then simply child.stdin.write(pass) it to the git process.
Is this solution possible? Or is there some approach that does NOT involve the command line of the parent process?
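For reference, here is a minimal sketch of the setup I just described (the route name is made up and the repository path is the placeholder from above):

import { createServer } from "node:http";
import { spawn } from "node:child_process";

// An HTTP endpoint that runs "git fetch" on the local repository.
createServer((req, res) => {
    if (req.url === "/fetch") {
        const git = spawn("git", ["-C", "path/to/local/repo", "fetch"]);
        // As described above, the password prompt never arrives on
        // git.stdout or git.stderr; it only shows up in the terminal
        // where this node process is running.
        git.on("close", (code) => {
            res.end(code === 0 ? "fetch finished" : `git exited with code ${code}`);
        });
    } else {
        res.statusCode = 404;
        res.end("unknown route");
    }
}).listen(8080);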
UPDATE2
I just tried attaching listeners for all possible events described in the official documentation: stdout and stderr (readable, data, end, close, error), stdin (drain, finish, pipe, unpipe, error), and the child process itself (message, exit, close, disconnect).
I tried the same listeners on process.stdout and process.stderr after piping the git streams to them.
Nothing fires on the password request...
The main reason your code won't work is that you only find out what happened with your git process after it has executed.
The major reason to use spawn is that the spawned process can be configured, and its stdout and stderr are Readable streams in the parent process.
I just tried this code out and it worked pretty well. Here is an example of spawning a process to perform a git push. However, as you may know, git will ask you for a username and password.
var spawn = require('child_process').spawn;
var git = spawn('git', ['push', 'origin', 'master']);

git.stderr.on('data', function (data) {
    // do something with it
});

git.stderr.pipe(process.stderr);
git.stdout.pipe(process.stdout);
Make a local git repo and set things up so that you can run the above push command; really, though, you can run any git command.
Copy this into a file called git_process.js.
Run it with node git_process.js.
I don't know if this will help, but I found the only way to intercept prompts from child processes was to set the detached option to true when you spawn the child process (see the sketch after this answer).
Like you, I couldn't find any info on prompts from child processes in Node on the interwebs. One would suspect the prompt should go to stdout and that you would then have to write the answer to stdin. If I remember correctly, you may find the prompt being sent to stderr.
It's a bit amazing to me that others haven't had this problem. Maybe we're just doing it wrong.
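Here is a minimal sketch of what I mean, under the assumption that the prompt really does arrive on one of the piped streams (possibly stderr, as mentioned); the repository path, the prompt pattern and the password handling are all placeholders:

import { spawn } from "node:child_process";

// Spawn git detached, keeping all stdio piped so the parent can watch
// for a prompt. Whether the prompt actually appears on these streams
// depends on how the child asks for input; it may talk to the terminal
// directly, in which case nothing shows up here.
const git = spawn("git", ["-C", "path/to/local/repo", "fetch"], {
    detached: true,
    stdio: ["pipe", "pipe", "pipe"],
});

git.stderr?.on("data", (chunk: Buffer) => {
    const text = chunk.toString();
    if (/password/i.test(text)) {
        // In the web-interface scenario, forward the prompt to the client
        // here and, once the user answers, write the reply to stdin.
        git.stdin?.write("the-password\n");
    }
});

git.on("close", (code) => {
    console.log(`git exited with code ${code}`);
});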

Node Child Processes -- Message Listener

So I've created a service file that fires off a message listener. This service file has some logic in it to make sure that a worker process is fired...
Basically:
_parentProcess = function () { /* logic to determine if this process is the parent */ };

if (!_parentProcess()) {
    _createParent();
} else {
    _executeWorker();
}
_createParent will fork off the service.js file with a flag so that the next time the process runs, we are in a child/worker process.
The worker process is what fires off my listener. Now, the problem I'm trying to wrap my head around is whether or not this is enough resource management. The listener gets a message that tells it to fire off some app, and that app may take anywhere between 10 and 120 seconds to complete.
If it crashes, obviously, the service.js file handles that and just spins up another one, but I'm more worried about blocking and making the most of my machine. Should I also fork the actual applications that I'm going to fire off from within the listener (see the sketch below), or is this enough?
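To illustrate, here is a minimal sketch of the "fork per job from the listener" option I'm considering (the job-runner.js file name and the message shape are made up):

import { fork } from "node:child_process";

// Hypothetical listener callback: instead of running the app inside the
// listener process, fork a separate Node process per job so a task that
// takes 10-120 seconds cannot block the listener from handling the next
// message.
function onMessage(job: { id: string; payload: unknown }): void {
    const child = fork("./job-runner.js");

    // Hand the job to the child over the built-in IPC channel.
    child.send(job);

    child.on("exit", (code) => {
        console.log(`job ${job.id} finished with exit code ${code}`);
    });
}

// Example usage with a dummy job.
onMessage({ id: "demo", payload: {} });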

http listeners inside threads

I am writing a web service which has to be able to reply to multiple HTTP requests.
From what I understand, I will need to deal with HttpListener.
What is the best method to receive an HTTP request (or better, multiple HTTP requests), translate it, and send the results back to the caller? How safe is it to use HttpListener on threads?
Thanks
You typically set up a main thread that accepts connections and passes the request to be handled by either a new thread or a free thread in a thread pool. I'd say you're on the right track though.
You're looking for something similar to:
while (boolProcessRequests)
{
    HttpListenerContext context = null;

    // this line blocks until a new request arrives
    context = listener.GetContext();

    Thread T = new Thread((new YourRequestProcessorClass(context)).ExecuteRequest);
    T.Start();
}
Edit (detailed description): If you don't have access to a web server and need to roll your own web service, you would use the following structure:
One main thread accepts connections/requests and, as soon as they arrive, passes each connection to a free thread to process. It's a bit like the hostess at a restaurant who hands you off to a waiter/waitress who will process your request.
In this case, the Hostess (main thread) has a loop:
- Wait at the door for new arrivals
- Find a free table and seat the patrons there and call the waiter to process the request.
- Go back to the door and wait.
In the code above, each request is packaged inside an HttpListenerContext object. Once a request arrives, the main thread creates a new thread and a new request-processor object initialized with the request data (the context). The request processor then uses the Response object inside the context to respond to the request. Obviously, you need to write YourRequestProcessorClass yourself, with a method like ExecuteRequest to be run by the thread.
I'm not sure what platform you're on, but there are .NET examples for threading and for HttpListener in the documentation.
