Is it ok to have many console.log()? - node.js

I want my application to have a "debug" mode that will print to the console everything that is happening, but putting in many console.log calls kind of pollutes my source code. Is it good practice to have a debug mode like this?
For example:
function doSomething() {
  // ...
  console.log("Did something");
}
I really need this debug mode because my functions are called on events, it's difficult to trace what's happening, and there are many possible scenarios.

Two issues:
Does your logging somehow "pollute" your source code? Not if the logging helps your future self or someone else understand your code. (Of course, don't allow your logging to have side-effects: no console.log( variable++ ).)
Is console logging good for programs put into production? No, not really. You should consider adopting a logging package.
I like Winston. There are plenty of other good packages; hopefully fans of those will write their own answers.
It allows you to send your log entries to files, to your *nix machine's syslog subsystem or to Windows Events or whatever, and to other places. It timestamps them if you want, and identifies which program they came from. Your console.log('current value', q) operation becomes logger.info('current value', q) and your console.error() becomes logger.error().
For a long-lived program (one that will still be used a few months from now) it is definitely worth your trouble to climb up the logger learning curve and rig up a solid logging system. If your program will run as part of a larger system, ask somebody how other parts of the system handle logging, and use the same scheme.
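A minimal sketch of that setup, assuming Winston 3.x; the log file name and the LOG_LEVEL variable are just examples. Note that the level option also gives you the "debug mode" asked about above: logger.debug() calls only print when the level is raised.
const winston = require('winston');

const logger = winston.createLogger({
  // raise to 'debug' via an environment variable instead of editing the code
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),   // timestamp every entry
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'app.log' })  // also keep a log file
  ]
});

logger.info('current value', { q: 42 });  // was: console.log('current value', q)
logger.error('something broke');          // was: console.error(...)
logger.debug('Did something');            // printed only when LOG_LEVEL=debug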

Related

Node.js optimizing module for best performance

I'm writing a crawler module which calls itself recursively to download more and more links, depending on a depth option parameter that is passed in.
Besides that, I'm doing more tasks on the returned resources I've downloaded (enriching/changing them depending on the configuration passed to the crawler). This process continues recursively until it's done, which might take a lot of time (or not) depending on the configuration used.
I wish to optimize it to be as fast as possible and not to hinder any Node.js application that will use it. I've set up an express server where one of the routes launches the crawler for a user-defined (query string) host. After launching a few crawling sessions for different hosts, I've noticed that I can sometimes get really slow responses from other routes that only return simple text. The delay can be anywhere from a few milliseconds to something like 30 seconds, and it seems to happen at random times (well, nothing is random, but I can't pinpoint the cause).
I've read an article by JetBrains about CPU profiling using the V8 profiler functionality that is integrated with WebStorm, but unfortunately it only shows how to collect the information and how to view it; it doesn't give me any hints on how to find such problems, so I'm pretty much stuck here.
Could anyone help me with this matter and guide me? Any tips on what my crawler might be doing to hinder the express server (a lot of recursive calls?), or on how to find those hotspots I'm looking for and optimize them?
It's hard to say anything more specific on how to optimize code that is not shown, but I can give some advice that is relevant to the described situation.
One thing that comes to mind is that you may be running some blocking code. Never use deep recursion without using setTimeout or setImmediate to break it up and give the event loop a chance to run once in a while.
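A minimal sketch of what that could look like inside a crawler; fetchLinks is just a stub for the real download step, and setImmediate is used because it hands control back to the event loop before each recursive step:
// stub standing in for the real "download a page and extract its links" step
function fetchLinks(url, callback) {
  setTimeout(() => callback(null, [url + '/a', url + '/b']), 10);
}

function crawl(url, depth, done) {
  if (depth === 0) return done(null);
  fetchLinks(url, (err, links) => {
    if (err) return done(err);
    let i = 0;
    (function next(err) {
      if (err || i >= links.length) return done(err || null);
      // yield to the event loop so other requests (e.g. express routes) can run
      setImmediate(() => crawl(links[i++], depth - 1, next));
    })();
  });
}

crawl('http://example.com', 2, () => console.log('crawl finished'));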

What's stopping my command from terminating?

I'm writing a command-line tool for installing Windows services using Node.js. After running a bunch of async operations, my tool should print a success message and then quit. Sometimes, however, it prints its success message and doesn't quit.
Is there a way to view what is queued on Node's internal event loop, so I can see what is preventing my tool from quitting?
The most typical culprit for me in CLI apps is event listeners that are keeping the process alive. I obviously can't say if that's relevant to you without seeing your code, though.
To answer your more general question, I don't believe there are any direct ways to view all outstanding tasks in the event loop (at least not from JS-land). You can, however, get pretty close with process._getActiveHandles() and process._getActiveRequests().
I really recommend you look up the documentation for them, though. Because you won't find any. They're undocumented. And they start with underscores. Use at your own peril. :)
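A quick sketch of how that can look in practice; since both calls are internal, treat the output purely as a debugging aid:
// dump whatever is currently keeping the event loop (and the process) alive
function dumpActive() {
  const handles = process._getActiveHandles();
  const requests = process._getActiveRequests();
  console.log('active handles:', handles.map(h => h && h.constructor ? h.constructor.name : String(h)));
  console.log('active requests:', requests.map(r => r && r.constructor ? r.constructor.name : String(r)));
}

// example: an open server handle shows up until it is closed
const net = require('net');
const server = net.createServer().listen(0, () => {
  dumpActive();                       // includes a 'Server' entry
  server.close(() => dumpActive());   // the 'Server' entry is gone afterwards
});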
Try using some tools to clarify the workflow, for example https://github.com/caolan/async#waterfall or https://github.com/caolan/async#eachseriesarr-iterator-callback.
That way you don't lose track of which callback has been called, and you can catch any errors thrown while executing commands, as sketched below.
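For illustration, a minimal waterfall sketch; the two service steps are made-up stand-ins for your real async operations:
const async = require('async');

// stubs standing in for the real install/start operations
function installService(name, cb) { setTimeout(() => cb(null, name), 100); }
function startService(name, cb)   { setTimeout(() => cb(null), 100); }

// waterfall runs the steps in order, passing each result on to the next step;
// any error skips straight to the final callback
async.waterfall([
  (cb) => installService('MyService', cb),
  (serviceName, cb) => startService(serviceName, cb)
], (err) => {
  if (err) return console.error('failed:', err);
  console.log('success');
});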
I think you also need to provide some code samples that lead to these errors.

How to see what started a thread in Xcode?

I have been asked to debug, and improve, a complex multithreaded app, written by someone I don't have access to, that uses concurrent queues (both GCD and NSOperationQueue). I don't have access to a plan of the multithreaded architecture, that's to say a high-level design document of what is supposed to happen when. I need to create such a plan in order to understand how the app works and what it's doing.
When running the code and debugging, I can see in Xcode's Debug Navigator the various threads that are running. Is there a way of identifying where in the source-code a particular thread was spawned? And is there a way of determining to which NSOperationQueue an NSOperation belongs?
For example, I can see in the Debug Navigator (or by using LLDB's "thread backtrace" command) a thread's stacktrace, but the 'earliest' user code I can view is the overridden (NSOperation*) start method - stepping back earlier in the stack than that just shows the assembly instructions for the framework that invokes that method (e.g. __block_global_6, _dispatch_call_block_and_release and so on).
I've investigated and sought various debugging methods but without success. The nearest I got was the idea of method swizzling, but I don't think that's going to work for, say, queued NSOperation threads. Forgive my vagueness please: I'm aware that having looked as hard as I have, I'm probably asking the wrong question, and probably therefore haven't formed the question quite clearly in my own mind, but I'm asking the community for help!
Thanks
The best I can think of is to put breakpoints on dispatch_async, -[NSOperation init], -[NSOperationQueue addOperation:] and so on. You could configure those breakpoints to log their stacktrace, possibly some other info (like the block's address for dispatch_async, or the address of the queue and operation for addOperation:), and then continue running. You could then look though the logs when you're curious where a particular block came from and see what was invoked and from where. (It would still take some detective work.)
You could also accomplish something similar with dtrace if the breakpoints method is too slow.

How can I handle an access violation in Visual Studio C++?

Usually an access violation terminates the program and I cannot catch a Win32 exception using try and catch. Is there a way I can keep my program running, even in case of an access violation? Preferably I would like to handle the exception and show to the user an access violation occurred.
EDIT: I want my program to be really robust, even against programming errors. The thing I really want to avoid is a program termination even at the cost of some corrupted state.
In Windows, this is called Structured Exception Handling (SEH). For details, see here:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms680657%28v=vs.85%29.aspx
In effect, you can register to get a callback when an exception happens. You can't do this to every exception for obvious reasons.
Using SEH, you can detect a lot of exceptions, access violations included, but not all (e.g. double stack fault). Even with the exceptions that are detectable, there is no way to ensure 100% stability after the exception. However, it may be enough to inform the user, log the error, send a message back to the server, and gracefully exit.
I will start by saying that your question contains a contradiction:
EDIT: I want my program to be really robust, ... The thing I really want to avoid is a program termination even at the cost of some corrupted state.
A program that keeps on limpin' in case of corrupted state isn't robust, it's a liability.
Second, an opinion of sorts. Regarding:
EDIT: I want my program to be really robust, even against programming errors. ...
If by programming errors you mean all bugs, then this is impossible.
If by programming errors you mean "the programmer misused some API and I want error messages instead of a crash", then write all code with double checks built in: for example, always check all pointers for NULL before usage, even if "they cannot be NULL if the programmer didn't make a mistake", etc. (Oh, you might also consider not using C++. ;-)
But IMHO, some amount of program-crashing-no-matter-what bugs will have to be accepted in any C++ application. (Unless it's trivial or you test the hell out of it for military or medical use (even then ...).)
Others already mentioned SEH -- it's a "simple" matter of __try / __except.
Maybe instead of trying to catch bugs inside the program, you could try to become friends with Windows Error Reporting (WER) -- I've never done this myself, but as far as I understand, you can completely customize it via the OutOfProcessException... callback functions.

Daemon that reads barcodes and sends them over HTTP to a PHP script

I am looking to build a prototype which should run completely headless and without user interaction. The system should be able to start a barcode reader and send the scanned code over the internet to a PHP script as file.php?code=var ...
Which is the simplest way to do this?
I am thinking of:
a Windows console app, some sort of ping...
a Linux console app, some sort of wget
or stuff like that.
Does anyone have a better approach?
System should be completely autonomous, plug it in, scan barcode, send code, repeat...
Your job can be divided into three main tasks:
Get the barcode value (not just a nice picture of it)
Transfer the code value to a remote PHP application
Process the data on the target (out of scope here)
There are lots of different possible ways to achieve this. But always keep in mind that you don't have to reinvent the wheel:
Barcode readers usually behave like an ordinary keyboard: a scanned code is treated as ordinary keyboard input. A good barcode reader can be configured to finish its input with an enter key (\r).
As already mentioned: use the programming language you have a good command of, or one you're curious about. You mentioned that the target system will be a PHP script, so the transfer console application could also be realized with PHP.
while (true) {
    // wait for bar-code reader input (the scanner "types" the code and presses enter)
    $code = trim(readline());
    // transfer the code to the server-side PHP script
    // (the host name is an example; the query string matches the one from the question)
    file_get_contents('http://example.com/file.php?code=' . urlencode($code));
}
See the PHP manual's readline functions for more details.
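If you would rather keep the console side in Node.js, a minimal sketch could look like this; the host name is a placeholder, and the query string is the one from the question:
// read scanned codes line by line from stdin and send each one to the PHP script
const readline = require('readline');
const http = require('http');

const rl = readline.createInterface({ input: process.stdin });

rl.on('line', (code) => {
  const url = 'http://example.com/file.php?code=' + encodeURIComponent(code.trim());
  http.get(url, (res) => {
    console.log('sent', code, '->', res.statusCode);
  }).on('error', (err) => console.error('transfer failed:', err.message));
});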
