mozilla extension - reading stdout of external process - xpcom

I am trying to run the dvipng process from a Thunderbird extension using Components.interfaces.nsIProcess. I need to read the standard output of the process, but I cannot find a way to do that. I found some threads on nsIProcess2, but it seems stdout support was never fully implemented there. Any suggestions?

nsIProcess2 is unrelated - it was implemented but later folded into nsIProcess. It was only about starting processes asynchronously.
The relevant bugs are bug 484246 and bug 68702. The latter has been resolved but so far that code doesn't ship with Firefox/Thunderbird by default (it's quite a bit of code that neither Firefox nor Thunderbird need themselves). So your options are:
Build IPCModule yourself and make it part of your extension - not recommended, because it will cause lots of trouble.
Create a native library that will call dvipng for you and use it via js-ctypes - should be the easiest solution (see the sketch after this list).
Turn dvipng into a library and use it directly via js-ctypes - probably not too hard either, and this will also give you better performance.
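For illustration, here is a minimal js-ctypes sketch of option 2. The wrapper library name (libdvipngwrapper) and its exported function run_dvipng() are hypothetical - you would build and ship that library with your extension:

Components.utils.import("resource://gre/modules/ctypes.jsm");

// Open the hypothetical native wrapper library shipped with the extension.
var lib = ctypes.open("libdvipngwrapper.so");

// Declare its exported function: int run_dvipng(const char *args);
var run_dvipng = lib.declare("run_dvipng",
                             ctypes.default_abi,
                             ctypes.int,        // return value: exit code
                             ctypes.char.ptr);  // argument: command-line string

var rc = run_dvipng("-T tight -o formula.png formula.dvi");
lib.close();

The wrapper would capture dvipng's stdout itself (e.g. via popen) and hand the result back, which is exactly the part nsIProcess cannot do for you.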

Related

WebControl in a separate process stealing focus revisited

I need to process AJAX in my crawler and would prefer using the system browser, though I may have to change my mind. My crawler program may generally be working in the background while the user works on other stuff in other applications.
Anyhow - since the WebControl leaks memory when processing JS libs that themselves leak memory - this can cause a crawler to quickly run out of memory. (Many SO posts about this.)
So I have created a solution that uses a separate small "dummy" executable hosting the WebControl, which takes input/output. This is launched as a separate process by the crawler. This part seems to work great. The child process is created/destroyed as many times as needed.
However, this child process with the embedded IE grabs focus on every page load (at least if e.g. JS code calls focus), which means that if the user is doing work in e.g. Word or whatever - keyboard focus is lost.
I have already moved the embedded IE window off-screen, but I cannot make it invisible in the traditional sense, since then the embedded IE stops working.
I have tried to disable all parent controls before calling navigate - but it does not work for me.
Any ideas I have not tried? Maybe somehow catch the Windows message that focuses the WebControl and ignore it? Or something so I can immediately refocus the control that previously had focus?
I currently use Delphi - but from my earlier investigations this question is applicable to VB, C# .NET etc. I will take solutions and ideas in any language.

nodejs fs.exists() and fs.existsAsync() are deprecated, but why can I still use it with Node v4 and v6

I am reading Nodejs documentation here https://nodejs.org/api/fs.html#fs_fs_exists_path_callback
And it says fs.exists() and fs.existsAsync() are deprecated.
So my intuition would be that it will throw an error if I am using a newer version of NodeJs.
However, using both NodeJs v4.3.2 and v6, I still see fs.exists() working. Why is that? Does that mean that if I migrate my system from NodeJS v0.10.0, I don't necessarily have to update the dependencies that invoke such functions, and it's backward compatible?
It means that the community behind node.js development is recommending against using this feature now because it has problems, and they MAY get rid of it sometime in the future to force people to stop using it.
So my intuition would be that it will throw an error if I am using a newer version of NodeJs.
They have not yet made it throw an error.
However, using both NodeJs v4.3.2 and v6, I still see fs.exists() working. Why is that?
And, from the title of your question:
but why can I still use it with Node v4 and v6?
Because though they are recommending against using it now, they have not yet removed it.
Does that mean that if I migrate my system from NodeJS v0.10.0, I don't necessarily have to update the dependencies that invoke such functions, and it's backward compatible?
No. The community behind node.js is telling you that they are reserving the right to remove those two functions in any future version.
Bottom line: If you want compatibility with future versions, stop using fs.exists() now.
fs.exists() is also inconsistent with other node.js async APIs in that its callback does not follow the typical calling convention of fn(err, data). It has no err parameter, which makes it an oddball.
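A quick illustration of the inconsistency (the file name is just an example):

const fs = require('fs');

// fs.exists() passes only a boolean to its callback - there is no err parameter:
fs.exists('somefile', (exists) => {
    console.log(exists ? 'found' : 'missing');
});

// Nearly every other async fs API follows the fn(err, data) convention:
fs.stat('somefile', (err, stats) => {
    if (err) return console.error(err);
    console.log(stats.size);
});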
You may also want to understand the reason they are problematic to use. A modern operating system is a multi-tasking system and the file system is a shared resource among potentially many processes. That means that if you do:
const fs = require('fs');

if (fs.existsSync("somefile")) {
    // execute some code when somefile does exist
} else {
    // execute some code when somefile does not exist
}
Then, the state of whether somefile exists or not could change between the time you run the fs.existsSync() call and the time you execute the code that assumes it knows whether the file exists or not. That's called a "race condition" and it's considered very bad design because it creates the possibility for extremely hard-to-reproduce bugs that may hit just occasionally (probably the worst kind of bug to try to find).
Note this directly from the node.js doc for fs.exists():
Using fs.exists() to check for the existence of a file before calling fs.open(), fs.readFile() or fs.writeFile() is not recommended. Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file does not exist.
If you're using the asynchronous version fs.exists(), then the race condition is even worse because your own node.js code could change the state of the file before your if/else logic runs.
Depending upon what you are generally trying to do, the non-race-condition substitute is to just attempt to open the file with some sort of exclusive access. If the file exists, you will successfully open it with no race condition. If the file did not exist, you will simply get an error and you can then handle that error. In some other cases, you just attempt to create the file with a mode that will fail if it already exists. Both of these situations use an atomic comparison inside the OS file system code so they do not have "race conditions".
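As a rough sketch of that pattern (the file name is just an example), you can pass the 'wx' flag so that creating the file fails atomically if it already exists:

const fs = require('fs');

// 'wx' = open for writing, but fail with EEXIST if the file already exists.
// The existence check and the creation happen atomically inside the OS.
fs.open('somefile', 'wx', (err, fd) => {
    if (err) {
        if (err.code === 'EEXIST') {
            // the file was already there - handle that case here
        } else {
            // some other I/O error
        }
        return;
    }
    // we created the file and own it: write via fd, then close it
    fs.close(fd, () => {});
});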
You should fix your code now. If you still don't understand what the recommended fix is, then please post your code that is using fs.exists() and the surrounding code context and we can help you with a better design.
Stability level 0 (deprecated) in Node.js means the feature can be removed at any time, though not necessarily in the immediate next version.
Don't rely on them being backwards-compatible, or even to have similar behaviors across versions even if they are present.
Per the documentation:
Stability: 0 - Deprecated This feature is known to be problematic, and changes are planned. Do not rely on it. Use of the feature may cause warnings. Backwards compatibility should not be expected.
In other words, it may drop out or otherwise completely stop working at any point without further notice. It may or may not work if/when you migrate.
For what it's worth, it's not atypical for deprecated features to remain for multiple major versions in other software as well. In the OSS world, I've seen deprecated features last for as long as the project was maintained, presumably because the maintainer/user base still had some use for the deprecated feature: it was good enough for their use case (even if it wasn't as good as it should/could have been, and even when a newer API was developed).

What's stopping my command from terminating?

I'm writing a command line tool for installing Windows services using Node JS. After running a bunch of async operations, my tool should print a success message then quit. Sometimes however, it prints its success message and doesn't quit.
Is there a way to view what is queued on Node's internal event loop, so I can see what is preventing my tool from quitting?
The most typical culprit for me in CLI apps is event listeners that are keeping the process alive. I obviously can't say if that's relevant to you without seeing your code, though.
To answer your more general question, I don't believe there are any direct ways to view all outstanding tasks in the event loop (at least not from JS-land). You can, however, get pretty close with process._getActiveHandles() and process._getActiveRequests().
I really recommend you look up the documentation for them, though. Because you won't find any. They're undocumented. And they start with underscores. Use at your own peril. :)
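That said, a throwaway snippet like the following can show what is keeping the loop alive. Treat it as a debugging hack only, since these internals may change or disappear in any Node release:

// An example handle that would keep the process alive:
const timer = setInterval(() => {}, 1000);

// Undocumented internals - use for debugging only:
console.log('Active handles:', process._getActiveHandles()); // the Timer shows up here
console.log('Active requests:', process._getActiveRequests());

clearInterval(timer); // releasing the handle lets the process exit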
Try using some tools to clarify the workflow - for example, https://github.com/caolan/async#waterfall or https://github.com/caolan/async#eachseriesarr-iterator-callback - so you don't lose track of which callbacks get called and can catch any errors thrown while executing commands.
I think you also need to provide some code samples that lead to these errors.

How to detect that no one is writing to a file in Linux?

I am wondering, is there a simple way to tell whether another entity has a certain file open for writing? I don't have time to use inotify continuously to wait for any current writer to finish writing. I need to do an intermittent check.
Thanks.
What exactly are you doing where you "don't have time to use inotify continuously"? First, you should be using the IN_CLOSE_WRITE flag so that inotify makes just one notification when the file gets closed after being written. Using it continuously makes no sense. Second, if your timing is that critical, I'm thinking writing to a file isn't your ideal solution. Do you control the first writer? Do you have to worry about anything else writing to the file after the first writer closes it?
lsof LiSts Open Files. fuser also works similarly (File USER), by telling you which user is using the file.
See: http://www.refining-linux.org/archives/23/16-Introduction-to-lsof-and-fuser/
Since you seem to want a library-style interface, and not system(), see ofl-lib.c. (It's really just the ofl program with everything but the main function removed.)
You can't do so easily in the general case, and even if you could, you cannot use the information in a non-racy manner (see caf's comment).
So I'd say, redesign your application so you do not need to know.

Automatically adjusting process priorities under Linux

I'm trying to write a program that automatically sets process priorities based on a configuration file (basically path - priority pairs).
I thought the best solution would be a kernel module that replaces the execve() system call. Too bad the system call table isn't exported in kernel versions > 2.6.0, so it isn't possible to replace system calls without really ugly hacks.
I do not want to do the following:
- Replace binaries with shell scripts that start and renice the binaries.
- Patch/recompile my stock Ubuntu kernel
- Do ugly hacks like reading kernel executable memory and guessing the syscall table location
- Polling of running processes
I really want to be:
- Able to control the priority of any process based on its executable path and a configuration file. Rules apply to any user.
Does anyone of you have any ideas on how to complete this task?
If you've settled for a polling solution, most of the features you want to implement already exist in the Automatic Nice Daemon. You can configure nice levels for processes based on process name, user and group. It's even possible to adjust process priorities dynamically based on how much CPU time it has used so far.
Sometimes polling is a necessity, and even more optimal in the end -- believe it or not. It depends on a lot of variables.
If the polling overhead is low enough, it beats the added complexity, cost, and RISK of developing your own kernel hooks to get notified of the changes you need. That said, when hooks or notification events are available, or can be easily injected, they should certainly be used if the situation calls for it.
This is classic programmer 'perfection' thinking. As engineers, we strive for perfection. This is the real world though and sometimes compromises must be made. Ironically, the more perfect solution may be the less efficient one in some cases.
I develop a similar 'process and process priority optimization automation' tool for Windows called Process Lasso (not an advertisement, it's free). I had a similar choice to make and have a hybrid solution in place. Kernel-mode hooks are available for certain process-related events in Windows (creation and destruction), but not only are they not exposed to user mode, they also aren't helpful for monitoring other process metrics. I don't think any OS is going to natively inform you of any change to any process metric. The overhead for that many different hooks might be much greater than simple polling.
Lastly, considering the HIGH frequency of process changes, it may be better to handle all changes at once (polling at interval) vs. notification events/hooks, which may have to be processed many more times per second.
You are RIGHT to stay away from scripts. Why? Because they are slow(er). Of course, the Linux scheduler does a fairly good job of handling CPU-bound threads by downgrading their priority and rewarding (upgrading) the priority of I/O-bound threads - so even under high load a script should be responsive, I guess.
There's another point of attack you might consider: replace the system's dynamic linker with a modified one which applies your logic. (See this paper for some nice examples of what's possible from the largely neglected art of linker hacking).
Where this approach will have problems is with purely statically linked binaries. I doubt there's much on a modern system which actually doesn't link something dynamically (things like busybox-static being the obvious exceptions, although you might regard the ability to get a minimal shell outside of your controls as a feature when it all goes horribly wrong), so this may not be a big deal. On the other hand, if the priority policies are intended to bring some order to an overloaded shared multi-user system then you might see smart users preparing static-linked versions of apps to avoid linker-imposed priorities.
Sure, just iterate through /proc/nnn/exe to get the pathname of the running image. Only use the ones with slashes, the others are kernel procs.
Check to see if you have already processed that one, otherwise look up the new priority in your configuration file and use renice(8) to tweak its priority.
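For illustration, here is a rough sketch of one pass of that polling loop (written in Node.js purely for illustration; the path-to-priority map is a hypothetical stand-in for your configuration file, and you would run this from a timer to poll):

const fs = require('fs');
const { execFile } = require('child_process');

// Hypothetical config: executable path -> desired nice value.
const priorities = { '/usr/bin/example': 10 };
const seen = new Set();

for (const pid of fs.readdirSync('/proc')) {
    if (!/^\d+$/.test(pid) || seen.has(pid)) continue;
    let exe;
    try {
        exe = fs.readlinkSync('/proc/' + pid + '/exe');
    } catch (e) {
        continue; // kernel threads and exited processes have no readable exe link
    }
    if (exe in priorities) {
        seen.add(pid); // don't renice the same process twice
        execFile('renice', ['-n', String(priorities[exe]), '-p', pid]);
    }
}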
If you want to do it as a kernel module then you could look into making your own binary loader. See the following kernel source files for examples:
$KERNEL_SOURCE/fs/binfmt_elf.c
$KERNEL_SOURCE/fs/binfmt_misc.c
$KERNEL_SOURCE/fs/binfmt_script.c
They can give you a first idea where to start.
You could just modify the ELF loader to check for an additional section in ELF files and when found use its content for changing scheduling priorities. You then would not even need to manage separate configuration files, but simply add a new section to every ELF executable you want to manage this way and you are done. See objcopy/objdump of the binutils tools for how to add new sections to ELF files.
Does anyone of you have any ideas on how to complete this task?
As an idea, consider using apparmor in complain-mode. That would log certain messages to syslog, which you could listen to.
If the processes in question are started by executing an executable file with a known path, you can use the inotify mechanism to watch for events on that file. Executing it will trigger an IN_OPEN and an IN_ACCESS event.
Unfortunately, this won't tell you which process caused the event to trigger, but you can then check which /proc/*/exe are a symlink to the executable file in question and renice the process id in question.
E.g. here is a crude implementation in Perl using Linux::Inotify2 (which, on Ubuntu, is provided by the liblinux-inotify2-perl package):
perl -MLinux::Inotify2 -e '
use warnings;
use strict;
my $x = shift(@ARGV);
my $w = new Linux::Inotify2;
$w->watch($x, IN_ACCESS, sub
{
    for (glob("/proc/*/exe"))
    {
        if (-r $_ && readlink($_) eq $x && m#^/proc/(\d+)/#)
        {
            system(@ARGV, $1)
        }
    }
});
1 while $w->poll
' /bin/ls renice
You can of course save the Perl code to a file, say onexecuting, prepend a first line #!/usr/bin/env perl, make the file executable, put it on your $PATH, and from then on use onexecuting /bin/ls renice.
Then you can use this utility as a basis for implementing various policies for renicing executables (or doing other things).
