Gearman-manager: Speed decreases when putty closed - gearman

SOLUTION:
The solution I found: use the low-level nohup program, which ignores the hangup signal sent by PuTTY when the connection is closed.
So, instead of ./gearman-manager start, I ran nohup ./gearman-manager start
NOTE: Still, I would like to know why it was slowing down when I closed PuTTY, or why it kept running at all if it had received the hangup signal.
I have a problem with the execution of a gearman worker after I close a PuTTY session.
This is what I have:
a gearman client, started by a cron job, checking something in the DB (infinite loop)
a gearman manager, started with the gearman-manager start command, receiving the client's tasks and managing the calls to a worker
a gearman worker reading/writing the DB and echoing the status of the current job
When I start gearman-manager I can see the echoes from my worker when it receives tasks and when it executes them. Tasks (updates in the DB) are executed at roughly one per second...
A) When I close the PuTTY session, the rate of changes in the DB drops enormously (to roughly one per 10 seconds)! Could you tell me why this is?
B) When I log back in with PuTTY, I don't get the output of gearman-manager back on the screen. I expected to log back in and see it continue echoing the status like it did before I closed PuTTY. Maybe this is because gearman-manager is started as root while the echoes come from a .php script run as the gearman user? Or maybe, when I log back in, the process is in the background?

You don't see the output when you create a new tty because the process was bound to the previous tty. Unless you use something like screen to keep the tty alive, you aren't going to see that output with a new terminal.
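Both effects are easy to reproduce outside gearman. As an illustration, here is a minimal Node.js sketch (a hypothetical demo.js, not gearman-specific) that keeps working after the terminal closes, by handling SIGHUP and writing to a file instead of the tty:

// demo.js: keep working after the terminal that started us goes away.
const fs = require('fs');
const log = fs.createWriteStream('demo.log', { flags: 'a' });

// Closing the terminal sends SIGHUP; the default action is to terminate.
// Installing a handler (or starting the process under nohup) overrides it.
process.on('SIGHUP', () => log.write('got SIGHUP, ignoring it\n'));

// Write to the file, not stdout: the original tty is gone after the hangup.
setInterval(() => log.write(`tick ${new Date().toISOString()}\n`), 1000);

Run it plainly and close the terminal: the process exits on SIGHUP unless the handler (or nohup) is in place. Either way, a new terminal never sees the old output, so tailing demo.log is the reliable way to watch it.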

Related

Nodejs stuck on processing whenever the app is restarted

I have a Node.js application running on Linux. As we all know, whenever I restart the app it gets a new PID. Suppose that while the app is running, a client connects to it and kicks off some process whose status is 'processing'. If the Node.js app restarts (on the server side) at that point in time, how can we make sure the client gets back to the previous processing state?
What is happening now is that whenever the server restarts, the process stays stuck in 'processing' forever.
Just direct me to a sample of how this scenario is handled in real life.
Thank You.
If I'm understanding you correctly, then the answer is you can't...
The reason for this is that, when you restart the process the event loop is restarted, meaning any processes that were running or were waiting in the event loop are gone. You are essentially clearing out the event loop when you restart.
I would say, though, that if you know the process is 'crashing' node, then you probably want to look into that process, see why it is crashing, and place it in a try/catch so it won't kill the server.
Now, with that said (and without knowing what 'processing state' really means), you could set a flag in your DB server for, say, 'job1', with a status column set to 'running' when it is kicked off. When the node server restarts, it can read the job statuses; if a job is in the 'running' state, you can fire it off again and, once complete, update the table to 'completed'. A sketch of this approach follows below.
This is probably not the most efficient way, as it's much better to figure out why the process is crashing, but as a fallback it could work. In a clustered environment, though, it could cause issues, because server 1 may fail while server 2 is processing, and server 1 does not know what server 2 is doing. More details about the use case, environment, etc. would probably allow for a better answer.
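A minimal Node.js sketch of that flag-in-DB approach, assuming a hypothetical db module wrapping your database client and a hypothetical runJob() containing the actual job logic (the jobs table and its columns are made up as well):

// Mark a job 'running' before it starts and 'completed' when it finishes.
const db = require('./db'); // hypothetical wrapper around your DB client

async function startJob(jobId) {
  await db.query("UPDATE jobs SET status = 'running' WHERE id = ?", [jobId]);
  await runJob(jobId); // hypothetical: the job's actual work
  await db.query("UPDATE jobs SET status = 'completed' WHERE id = ?", [jobId]);
}

// Call once on boot: anything still marked 'running' was interrupted by
// the restart, so fire it off again.
async function resumeInterruptedJobs() {
  const rows = await db.query("SELECT id FROM jobs WHERE status = 'running'");
  for (const row of rows) {
    await startJob(row.id);
  }
}

Note this assumes jobs are idempotent, since an interrupted job is re-run from the beginning; in a cluster you would also need some owner/lease column so two servers don't pick up the same job.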

node.js shutdown / restart observer

We have a number of tasks in a static queue on our server. When the server shuts down (or restarts) we'd prefer not to lose these tasks and therefore we will stash them in a DB structure. On boot this DB structure will be dumped back into the static queue and processing of these queued tasks will continue.
How is it possible to detect a shutdown, halt it, and then let it continue once the above DB storage function has been executed? And from what context should this shutdown observation be made?
I'm not sure I understood your question, but if I got it right you want to run some code before your script exits, to do some kind of cleanup.
You can use process.on(event, handler) to register an exit handler for your script for various events, including exit (the script exits), SIGINT (the user presses Ctrl+C) and uncaughtException (a thrown exception is not caught). Take a look at this answer.
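For illustration, a hedged sketch of such handlers, assuming a hypothetical synchronous stashQueueToDb() that persists the static queue:

// Register cleanup handlers early, before the queue starts filling.
process.on('SIGINT', () => {
  stashQueueToDb(); // hypothetical: write the static queue to the DB
  process.exit(0);  // exit deliberately once the stash is done
});

process.on('uncaughtException', (err) => {
  console.error('uncaught exception:', err);
  stashQueueToDb();
  process.exit(1);
});

process.on('exit', (code) => {
  // 'exit' handlers must be synchronous; async work will not finish here.
  console.log(`exiting with code ${code}`);
});

One caveat: 'exit' handlers cannot await anything, so the DB write has to happen synchronously, or in the signal handlers before process.exit() is called. Also note that SIGKILL and a power loss cannot be observed at all, which is an argument for persisting the queue incrementally rather than only at shutdown.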

Why does my node.js application occasionally hang when I don't have the terminal open?

I have a nodejs application that I run like this, over SSH:
$ tmux
$ node server.js
This starts my node application in a tmux session.
Obviously, I don't have the SSH session open all the time.
What I've been finding is that occasionally my application can get into a state where it won't serve up any pages. This might be related to the application itself, or perhaps just a poorly disconnected SSH session.
Either way, simply logging into SSH, running:
$ tmux attach
And giving focus to the pane makes everything responsive again.
I thought the entire point of node.js was that everything is non-blocking - then what's going on here?
When a pane is in copy mode, tmux does not read from its tty. If some program running "in" the tty continues to generate output, then the OS's tty buffer will eventually fill and cause the writing process/thread to block. I do not know the internals of Node.js, but it may not expect writes to stdout/stderr to block: the console functions do not seem to have callbacks, so they may actually be blocking.
So, Node.js could very well end up blocked if the pane in which it was running was left in copy mode when your SSH connection was dropped.
If you need to assure non-blocking logging, then you might want to redirect (or tee) your stdout and stderr to a file and use something like less to view the prior logs (avoiding tmux’s copy mode since it might cause blocking).
Maybe something like this:
# Redirect stdout/stderr to a file, running Node.js in the background.
# Start a "less +F" on the log so that we immediately have a "tail" running.
node app.js >>app.log 2>&1 & less +F app.log
Or
# This pane will act as a 'tail -f', but do not use copy-mode here.
# Instead, run e.g. 'less app.log' in another pane to review prior logs.
node app.js 2>&1 | tee -a app.log
Or, if you are using a logging library, it might have something that you can use to automatically write to files.
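As a rough in-app version of the same idea (the app.log name is arbitrary), writing through a file stream instead of the tty means a pane left in copy mode can never block the process:

// logger.js: send log lines to a file so stdout/stderr never touch a
// (possibly blocked) tty; the stream buffers writes in memory.
const fs = require('fs');
const stream = fs.createWriteStream('app.log', { flags: 'a' });

function log(msg) {
  stream.write(`${new Date().toISOString()} ${msg}\n`);
}

module.exports = { log };

Use log(...) in place of console.log(...) and watch the output with less +F app.log from any pane or terminal.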

How to trace IIS worker process requests

I need to be able to monitor requests from IIS w3wp processes.
How can I see IIS worker process Requests?
To trace all requests currently executing in IIS worker processes:
1. Open a command window, type logman start session_name -p "IIS: Request Monitor" -ets, and press ENTER.
2. Event Tracing for Windows prints to the screen details about the trace session you just started, including the name of the session, the file name where the trace data will be collected (session_name.etl by default), and whether or not the command was successful.
3. Allow the trace session to run until you have reproduced the problem, or until your sites have processed enough requests to produce a manageable data set.
4. From the command prompt, type logman stop session_name -ets and press ENTER.
I'm not as experienced on Windows as on Linux, so Ravindra's answer seems interesting (is this just scheduling a particular Event Viewer-style session, or does it actually log something deeper?).
As you particularly ask about 'IIS worker process Requests' you have two options.
GUI
Open inetmgr, go to the root server level, go to Worker Processes and double-click the worker process of your choice. A new screen will load and you will see anything that worker is currently processing.
Command-line
Rather than just give you a single command to copy and paste, this article is a great starter: http://www.iis.net/learn/get-started/getting-started-with-iis/getting-started-with-appcmdexe
The particular command you want is under the section 'INSPECTING CURRENTLY EXECUTING REQUESTS'.

Nutch crawl fails when run as a background process on linux

When I run the Nutch crawl as a background process on Ubuntu in local mode, the Fetcher aborts with hung threads. The message is something like:
WARN fetcher.Fetcher - Aborting with "X" hung threads.
I start the script using nohup and & because I want to log off from the session and have the crawler keep running on the server. Otherwise, when the crawl finishes at a certain depth and the crawldb is being updated, the SSH session times out. I've tried configuring "keep alive" messages without much help. The command is something like:
nohup ./bin/nutch crawl ....... &
Has anybody experienced this before? It seems to happen only when I use nohup or &.
The hung-threads message is logged by the Fetcher class when some requests hang for longer than the allowed timeout, despite all intentions.
In Fetcher.java, lines 926-930:
if ((System.currentTimeMillis() - lastRequestStart.get()) > timeout) {
  if (LOG.isWarnEnabled()) {
    LOG.warn("Aborting with " + activeThreads + " hung threads.");
  }
  return;
}
The timeout for requests is defined by mapred.task.timeout, whose default value is 10 minutes. You might increase it, though I am not sure it will be a 100% clean fix.
When I observed this phenomenon, I added loggers to the code to find which URL's request hung for more than 10 minutes, and concluded that the issue appeared with large files, particularly when the server took a long time over the data transfer.
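If you do raise the timeout, the property goes in your Hadoop/Nutch configuration. A hedged example (the 30-minute value is arbitrary; for Nutch in local mode, conf/nutch-site.xml is the usual place):

<!-- Inside the <configuration> element of conf/nutch-site.xml. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value> <!-- milliseconds: 30 minutes -->
</property>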
