How to completely erase the console output in Windows? - node.js

I'm trying to erase the Windows console programmatically from a Node.js script. Not just slide the console output out of view... I want to actually clear it.
I'm writing a tool similar to TypeScript's tsc command, where it will watch a folder and incrementally compile the project. As such, on every file change, I rerun the compiler, and output any errors that are found (one line each). I would like to totally erase the console output so that users are not confused by old error messages as they scroll up the console.
When you run tsc --watch in a directory, TypeScript does exactly what I want. tsc actually erases the entire console output.
I've tried all of the following things:
process.stdout.write("\x1Bc");
process.stdout.write('\033c')
var clear = require('cli-clear'); clear();
I tried all of the escape codes from this post.
process.stdout.write("\u001b[2J\u001b[0;0H");
All of these either:
Printed an unknown char to the console
Slid the console down, equivalent to cls, which is NOT what I want.
How do I actually clear the screen and remove ALL of the output? I'm open to using a node module, piping output, spawning new cmds, hacks, etc., as long as it gets the job done.
Here's a sample node.js script to test out the issue.
for (var i = 0; i < 15; i++) {
    console.log(i + ' --- ' + i);
}
//clear the console output here somehow

Adapted from a previous answer. You will need a C compiler (tested with MinGW/GCC).
#include <windows.h>

int main(void) {
    HANDLE hStdout;
    CONSOLE_SCREEN_BUFFER_INFO csbiInfo;
    COORD destinationPoint;
    SMALL_RECT sourceArea;
    CHAR_INFO Fill;

    // Get console handle
    hStdout = CreateFile("CONOUT$", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_WRITE, 0, OPEN_EXISTING, 0, 0);

    // Retrieve console information
    if (GetConsoleScreenBufferInfo(hStdout, &csbiInfo)) {
        // Select all the console buffer as source
        sourceArea.Top = 0;
        sourceArea.Left = 0;
        sourceArea.Bottom = csbiInfo.dwSize.Y - 1;
        sourceArea.Right = csbiInfo.dwSize.X - 1;

        // Select a place out of the console to move the buffer
        destinationPoint.X = 0;
        destinationPoint.Y = 0 - csbiInfo.dwSize.Y;

        // Configure fill character and attributes
        Fill.Char.AsciiChar = ' ';
        Fill.Attributes = csbiInfo.wAttributes;

        // Move all the information out of the console buffer and init the buffer
        ScrollConsoleScreenBuffer(hStdout, &sourceArea, NULL, destinationPoint, &Fill);

        // Position the cursor
        destinationPoint.X = 0;
        destinationPoint.Y = 0;
        SetConsoleCursorPosition(hStdout, destinationPoint);
    }
    return 0;
}
Compiled as clearConsole.exe (or whatever you want), it can be used from Node as:
const { spawn } = require('child_process');
spawn('clearConsole.exe');
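Since spawn() is asynchronous, a watcher will usually want to wait for the helper to exit before printing the next batch of compiler output, otherwise the fresh messages can be wiped together with the old ones. A minimal sketch of that pattern, assuming the executable is named clearConsole.exe as above and can be found on the PATH (the error strings are just placeholders):

const { spawn } = require('child_process');

// Clear the console, then print the fresh diagnostics once the helper has exited.
function clearThenPrint(lines) {
    const clear = spawn('clearConsole.exe');
    clear.on('close', function () {
        lines.forEach(function (line) { console.log(line); });
    });
}

clearThenPrint([
    'src/app.ts(3,1): error TS2304 ...',
    'src/app.ts(9,5): error TS1005 ...'
]);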

Related

Can someone trace the reason for segmentation fault?

public class Watcher : Object
{
    private int _fd;
    private uint _watch;
    private IOChannel _channel;
    private uint8[] _buffer;
    private int BUFFER_LENGTH;

    public Watcher(string path, Linux.InotifyMaskFlags mask) {
        _buffer = new uint8[BUFFER_LENGTH];

        //➔ Initialize notify subsystem
        _fd = Linux.inotify_init();
        if (_fd < 0) {
            error(@"Failed to initialize the notify subsystem: $(strerror(errno))");
        }

        //➔ actually adding abstraction to linux file descriptor
        _channel = new IOChannel.unix_new(_fd);

        //➔ watch the channel for given condition
        //➔ IOCondition.IN => when the channel is ready for reading, IOCondition.HUP => hangup (error)
        _watch = _channel.add_watch(IOCondition.IN | IOCondition.HUP, onNotified);
        if (_watch < 0) {
            error(@"Failed to add watch to channel");
        }

        //➔ Tell linux kernel to watch for any mask (for ex. access, modify) on a given filepath
        var ok = Linux.inotify_add_watch(_fd, path, mask);
        if (ok < 0) {
            error(@"Failed to add watch to path -- $path : $(strerror(errno))");
        }
        print(@"Watching for $(mask) on $path");
    }

    protected bool onNotified(IOChannel src, IOCondition condition)
    {
        if ((condition & IOCondition.HUP) == IOCondition.HUP) {
            error(@"Received hang up from inotify, can't get update");
        }
        if ((condition & IOCondition.IN) == IOCondition.IN) {
            var bytesRead = Posix.read(_fd, _buffer, BUFFER_LENGTH);
            Linux.InotifyEvent *pevent = (Linux.InotifyEvent*) _buffer;
            handleEvent(*pevent);
        }
        return true;
    }

    protected void handleEvent(Linux.InotifyEvent ev) {
        print("Access Detected!\n");
        Posix.exit(0);
    }

    ~Watcher() {
        if (_watch != 0) {
            Source.remove(_watch);
        }
        if (_fd != -1) {
            Posix.close(_fd);
        }
    }
}

int main(string[] args) requires (args.length > 1)
{
    var watcher = new Watcher(args[1], Linux.InotifyMaskFlags.ACCESS);
    var loop = new MainLoop();
    loop.run();
    return 0;
}
The above code can be found in "Introducing Vala Programming" by Michael Lauer.
Proof of failure:
(screenshot: a segmentation fault as soon as the watched file is accessed)
Terminal 1:
./inotifyWatcher
Terminal 2:
cat
As soon as I access the file, a segmentation fault occurs.
I have also tried using gdb to find the cause of the failure, but its output is mostly cryptic to me. I am using Parrot (Debian-based, 64-bit) on my machine. Also, I am new to this (Stack Overflow, Linux kernel programming).
Vala source line numbers can be included in the binary when compiling with the --debug switch. The line numbers appear in the .debug_line DWARF section of an ELF binary:
valac --debug --pkg linux inotifyWatcher.vala
Run the binary using gdb in the first terminal:
gdb --args ./inotifyWatcher .
(gdb) run
The dot tells the program to watch the current directory. When the current directory is then accessed with a command like ls, the watching program segfaults. The output from GDB is:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401a86 in watcher_onNotified (self=0x412830, src=0x40e6e0, condition=G_IO_IN) at inotifyWatcher.vala:51
51 handleEvent(*pevent);
GDB includes the line number, 51, from the source file and shows the line.
So the problem has to do with reading from the file descriptor and then passing the buffer to handleEvent. You probably want to check that bytesRead is greater than zero, and I'm not sure about the use of pointers in this example. (Note that, in the code as posted, BUFFER_LENGTH is never assigned a value, so the buffer is allocated with length zero, which would explain why the read and the subsequent pointer dereference misbehave.) Explicit pointers like that should rarely be needed in Vala; it may require a change to the binding, e.g. using ref to modify the way the argument is passed.

Making ESLint work for files which are loaded by others

I have a set of scripts in which one loads another script file and uses the functions defined in it.
For example, let's say I have a main.js script with the following source:
load("helper.js");
var stdin = new java.io.BufferedReader( new java.io.InputStreamReader(java.lang.System['in']) );
function readline() {
var line = stdin.readLine();
return line;
}
var N = parseInt(readline());
for(var i = 0; i< N; i++)
{
print("fd630b881935b5d43180ff301525488a");
var num = parseInt(readline());
var ans = perfectNumberCheck(num);
print(ans);
print("dc29e6fa38016b00627b6e52956f3c64");
}
I have another script file, helper.js, which has the following source:
function perfectNumberCheck(num) {
    if (num == 1)
    {
        return 0;
    }
    var halfNum = (num / 2) + 1;
    var sum = 0;
    var retVal = 0;
    for (var i = 1; i < halfNum; i++) {
        if (num % i === 0) {
            sum = sum + i;
        }
    }
    if (sum == num) {
        retVal = 1;
    }
    else {
        retVal = 0;
    }
    return retVal;
}
As can be seen, main.js uses the function perfectNumberCheck. Now, when I run ESLint on both files using eslint main.js helper.js or eslint *.js, I get the no-unused-vars error 'perfectNumberCheck' is defined but never used, even though it is being used in main.js.
I want to keep this error in the configuration but don't want ESLint to show it in such cases.
Is there a way to make ESLint resolve these dependencies without writing the entire code in a single script file?
You can add a comment like /* exported perfectNumberCheck */ to helper.js to tell no-unused-vars that the function is used elsewhere outside that file:
/* exported perfectNumberCheck */
function perfectNumberCheck() {
// ...
}
By itself, ESLint lints each file in isolation, so it will not resolve identifiers defined in other files. The /* exported ... */ comment and globals allow you to inform ESLint of dependencies in other files.
Since exported comments are intended to indicate names that are used elsewhere by being present in the global scope, they have no effect when the file's source is not in the global scope. Specifically, quoting the docs:
Note that /* exported */ has no effect for any of the following:
when the environment is node or commonjs
when parserOptions.sourceType is module
when ecmaFeatures.globalReturn is true
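To handle the other side mentioned above (globals), the consuming file can declare the identifiers it expects from helper.js with a /* global */ comment (or a globals entry in the ESLint config), so that rules such as no-undef, if enabled, don't flag them. A minimal sketch of how main.js might start:

/* global perfectNumberCheck */
load("helper.js");

var result = perfectNumberCheck(28);
print(result); // 28 is a perfect number, so this prints 1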

Is 7zip stdout broken? Is there a way to capture the progress in nodejs? [Windows]

I am trying to capture the stdout of 7zip when it processes files and get the percentage in Node.js, but it doesn't behave as expected: 7zip doesn't output anything to stdout until the very end of the execution, which is not very helpful, especially when I have large files being compressed and no feedback is shown for a very long time.
The code I am using (simplified):
// 7zip test, place the 7z.exe in the same dir, if it's not on %PATH%
var cp = require('child_process');

var inputFile = process.argv[2];
if (inputFile == null) return;

var regProgress = /(\d{1,3})%\s*$/; // get the last percentage of the string, 3 digits

var proc = cp.spawn("7z.exe", ["a", "-t7z", "-y", inputFile + ".7z", inputFile]);
proc.stdout.setEncoding("utf8");
proc.stdout.on("data", function(data) {
    if (regProgress.test(data))
        console.log("Progress = " + regProgress.exec(data)[1] + "%");
});
proc.once("exit", function(exit, sig) { console.log("Complete"); });
I have used the same code to get the percentage with WinRAR successfully, so I am beginning to think that 7zip might be buggy. Or am I doing it wrong? Can I forcefully read the stdout of a process with a timer, perhaps?
The same code above, with the following line replaced, works as expected with WinRAR.
var proc = cp.spawn("Rar.exe",["a","-s","-ma5","-o+",inputFile+".rar",inputFile]);
If anyone knows why this happens and if it is fixable, I would be grateful! :-)
P.S. I have tried 7za.exe, the command-line version of 7zip, as well as the stable, beta, and alpha versions; they all have the same issue.
It is no longer necessary to use a terminal emulator like pty.js: you can pass the -bsp1 switch to 7z to force it to write its progress output to stdout.
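A minimal sketch of the question's code adjusted to use that switch (assuming a reasonably recent 7-Zip version that supports -bsp1; the regular expression is just one way to pull the percentage out of the stream):

var cp = require('child_process');

var inputFile = process.argv[2];
if (inputFile == null) return;

// -bsp1 redirects 7-Zip's progress indicator to stdout
var proc = cp.spawn("7z.exe", ["a", "-t7z", "-y", "-bsp1", inputFile + ".7z", inputFile]);
proc.stdout.setEncoding("utf8");
proc.stdout.on("data", function(data) {
    var m = /(\d{1,3})%/.exec(data);
    if (m)
        console.log("Progress = " + m[1] + "%");
});
proc.once("exit", function() { console.log("Complete"); });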
Without that switch, 7-zip only outputs progress when stdout is a terminal. To trick 7-zip, you need to npm install pty.js (requires Visual Studio or VS Express with the Windows SDK) and then use code like:
var pty = require('pty.js');

var inputFile = process.argv[2],
    pathTo7zip = 'c:\\Program Files\\7-Zip\\7z.exe';
if (inputFile == null)
  return;

var term = pty.spawn(process.env.ComSpec, [], {
  name: 'ansi',
  cols: 200,
  rows: 30,
  cwd: process.env.HOME,
  env: process.env
});

var rePrg = /(\d{1,3})%\r\n?/g,
    reEsc = /\u001b\[\w{2}/g,
    reCwd = new RegExp('^' + process.cwd().replace(/\\/g, '\\\\'), 'm'),
    prompts = 0,
    buffer = '';

term.on('data', function(data) {
  var m, idx;
  buffer += data;
  // remove terminal escape sequences
  buffer = buffer.replace(reEsc, '');
  // check for multiple progress indicators in the current buffer
  while (m = rePrg.exec(buffer)) {
    idx = m.index + m[0].length;
    console.log(m[1] + ' percent done!');
  }
  // check for the cmd.exe prompt
  if (m = reCwd.exec(buffer)) {
    if (++prompts === 2) {
      // command is done
      return term.kill();
    } else {
      // first prompt is before we started the actual 7-zip process
      if (idx === undefined) {
        // we didn't see a progress indicator, so make sure to truncate the
        // prompt from our buffer so that we don't accidentally detect the same
        // prompt twice
        buffer = buffer.substring(m.index + m[0].length);
        return;
      }
    }
  }
  // truncate the part of our buffer that we're done processing
  if (idx !== undefined)
    buffer = buffer.substring(idx);
});

term.write('"'
           + pathTo7zip
           + '" a -t7z -y "'
           + inputFile
           + '.7z" "'
           + inputFile
           + '"\r');
It should be noted that 7-zip does not always output 100% at finish. If the file compresses quickly, you may see only a single 57%, for example, so you will have to handle that however you want (one option is sketched below).
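For instance, with the -bsp1 snippet shown earlier, a simple option is to report a final 100% once the child exits successfully, whether or not 7-zip printed it itself (a sketch that would replace that snippet's exit handler):

proc.once("exit", function(code) {
    if (code === 0)
        console.log("Progress = 100%"); // 7-zip may finish without printing 100%
    console.log("Complete");
});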

How to launch editor like vi or emacs from node.js REPL?

I want to launch an editor like vi or emacs from the node.js REPL.
I have tried two approaches so far:
Node Addons
Here's what my editor.cc looks like:
const char *tempFile = "TEMP_FILE"; // File to be opened with the editor

Handle<Value> launchEditor (const Arguments& args) {
    const char *editor = "vi";
    Local<String> buffer;
    pid_t pid = fork();
    if (pid == 0) {
        execlp(editor, editor, tempFile, NULL);
        // Exit with "command-not-found" if above fails.
        exit(127);
    } else {
        waitpid(pid, 0, 0);
        char *fileContent = readTempFile(); // Simple file IO code to read file.
        buffer = String::New(fileContent);
        free(fileContent);
    }
    return buffer;
}

// MAKE IT A NODE MODULE
void Init(Handle<Object> target) {
    target->Set(String::NewSymbol("editor"), FunctionTemplate::New(launchEditor)->GetFunction());
}
NODE_MODULE(editor, Init)
This worked when I had node v0.6.12 (compiled with node-waf), but when I updated node to v0.8.1 this code stopped working (compiled with node-gyp). The editor simply didn't appear, and the file content was read and returned (with emacs), or the editor ended up running as a background process (with vi)! Is there anything I need to change for it to work with 0.8.1?
Even if the editor is launched as a background process, can I bring it to the foreground from the code itself?
Child_process module
spawn = require('child_process').spawn;
editor = spawn('emacs', ['TEMP_FILE']);
But this is not working properly. With emacs it shows the error "input is not a tty", and vi gives a weird interface.
Can someone help with any of the solutions above, or suggest some other working solution?
I just stumbled upon one about a week ago; you should give it a try: node-editor
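As a side note on the child_process attempt from the question: the "input is not a tty" error usually means the editor was not given the parent's terminal. A minimal sketch of that approach with the parent's stdio inherited (vi and TEMP_FILE are just placeholders; this works when run as a script, while the REPL itself complicates matters because it is also reading from the same terminal):

var spawn = require('child_process').spawn;

// Hand the editor the parent's stdin/stdout/stderr so it gets a real TTY.
var editor = spawn('vi', ['TEMP_FILE'], { stdio: 'inherit' });

editor.on('exit', function(code) {
    // The editor has closed; read the temp file back in here.
    console.log('editor exited with code ' + code);
});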

Where do writes to stdout go when launched from a cygwin shell, no redirection

I have an application, let's call it myapp.exe, which is dual-mode console/GUI, built as /SUBSYSTEM:WINDOWS. (There's a tiny 3 KB shim, myapp.com, to cause cmd.exe to wait before displaying the new prompt.)
If I launch from a command prompt:
myapp -> cmd.exe runs myapp.com which runs myapp.exe. stdout is initially a detached console, by using AttachConsole and freopen("CONOUT$", "w", stdout) my output appears in the command box. OK
myapp.exe -> cmd.exe displays the prompt too early (known problem), otherwise same as previous. Not a normal usage scenario.
myapp > log -> stdout is a file, normal use of std::cout ends up in the file. OK
If I launch from Windows explorer:
myapp.com -> console is created, stdout is console, output goes into console. Same result as using /SUBSYSTEM:CONSOLE for the entire program, except that I've added a pause when myapp.com is the only process in the console. Not a normal usage scenario.
myapp.exe -> stdout is a NULL handle, I detect this and hook std::cout to a GUI. OK
If I launch from Matlab shell:
system('myapp') or system('myapp.com') or system('myapp.exe') -> For all three variations, stdout is piped to MatLab. OK
If I launch from a cygwin bash shell:
./myapp.com -> Just like launch from cmd.exe, the output appears in the command box. OK
./myapp -> (bash finds ./myapp.exe). This is the broken case. stdout is a non-NULL handle but output goes nowhere. This is the normal situation for running the program from bash and needs to be fixed!
./myapp > log -> Just like launch from cmd.exe with file redirection. OK
./myapp | cat -> Similar to file redirection, except output ends up on the console window. OK
Does anybody know what cygwin sets as stdout when launching a /SUBSYSTEM:WINDOWS process and how I can bind std::cout to it? Or at least tell me how to find out what kind of handle I'm getting back from GetStdHandle(STD_OUTPUT_HANDLE)?
My program is written with Visual C++ 2010, without /clr, in case that matters in any way. OS is Windows 7 64-bit.
EDIT: Additional information requested.
CYGWIN environment variable is empty (or non-existent).
GetFileType() returns FILE_TYPE_UNKNOWN. GetLastError() returns 6 (ERROR_INVALID_HANDLE). It doesn't matter whether I check before or after calling AttachConsole().
However, if I simply ignore the invalid handle and freopen("CONOUT$", "w", stdout) then everything works great. I was just missing a way to distinguish between (busted) console output and file redirection, and GetFileType() provided that.
EDIT: Final code:
bool is_console(HANDLE h)
{
    if (!h) return false;
    ::AttachConsole(ATTACH_PARENT_PROCESS);
    if (FILE_TYPE_UNKNOWN == ::GetFileType(h) && ERROR_INVALID_HANDLE == GetLastError()) {
        /* workaround cygwin brokenness */
        h = ::CreateFile(_T("CONOUT$"), GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
        if (h) {
            ::CloseHandle(h);
            return true;
        }
    }
    CONSOLE_FONT_INFO cfi;
    return ::GetCurrentConsoleFont(h, FALSE, &cfi) != 0;
}

bool init( void )
{
    HANDLE out = ::GetStdHandle(STD_OUTPUT_HANDLE);
    if (out) {
        /* stdout exists, might be console, file, or pipe */
        if (is_console(out)) {
#pragma warning(push)
#pragma warning(disable: 4996)
            freopen("CONOUT$", "w", stdout);
#pragma warning(pop)
        }
        //std::stringstream msg;
        //DWORD result = ::GetFileType(out);
        //DWORD lasterror = ::GetLastError();
        //msg << result << std::ends;
        //::MessageBoxA(NULL, msg.str().c_str(), "GetFileType", MB_OK);
        //if (result == FILE_TYPE_UNKNOWN) {
        //    msg.str(std::string());
        //    msg << lasterror << std::ends;
        //    ::MessageBoxA(NULL, msg.str().c_str(), "GetLastError", MB_OK);
        //}
        return true;
    }
    else {
        /* no text-mode stdout, launch GUI (actual code removed) */
    }
}
The GetFileType() function makes it possible to distinguish between some types of handles, in particular consoles, pipes, files, and broken handles.
