Node.js: write to stdout without a newline

I have the feeling that this is probably not possible:
I am trying to print text to the terminal without a newline.
I have tried process.stdout.write and npm jetty, but they both seem to automatically append a newline at the end.
Is it possible to write to stdout without an automatic newline?
Just to be clear: I am not concerned about browsers. I am only interested in UNIX/Linux, writing what in C/C++ would be the equivalent of:
std::cout << "blah";
printf("blah");

process.stdout.write() does not automatically add a new line. If you post precise details about why you think it does, we can probably tell you how you are getting confused, but while console.log() does add a newline, process.stdout.write() has no frills and will not write anything you don't explicitly pass to it.
Here's a shell session providing supporting evidence:
echo 'process.stdout.write("123")' > program.js
node program.js | wc -c
3

According to the Node.js documentation for process.stdout.write(), a console.log equivalent could look like this:
console.log = function(msg) {
    process.stdout.write(`${msg}\n`);
};
So process.stdout.write should meet your request...
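For instance, here is a minimal sketch of a progress indicator (the interval and message are made up for illustration) that only works because process.stdout.write adds no newline:

let count = 0;
const timer = setInterval(() => {
    // "\r" moves the cursor back to the start of the line,
    // so each write overwrites the previous one
    process.stdout.write(`\rprogress: ${++count}/5`);
    if (count === 5) {
        clearInterval(timer);
        process.stdout.write("\n"); // finish with one explicit newline
    }
}, 500);

If a newline were appended automatically, each update would land on its own line instead of overwriting the previous one.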

Related

How to write to stderr in Nim?

I know there is echo, which writes to stdout. Is it possible to redirect echo to stderr, or is there another way to write to stderr?
It is not possible to redirect echo, but the same can be achieved by using writeLine on the stderr handle (no special imports required):
stderr.writeLine("Error: ", 42)
Documentation links:
writeLine in streams module
stderr in system module
Another way is to call writeLine directly; it is listed among the system module's exports. The docs for echo suggest also calling flushFile, as below.
writeLine(stderr, "my err")
flushFile(stderr)
This was tested with Nim 1.4.2.
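Putting the two answers together, a minimal sketch (the message text is arbitrary):

echo "normal output"             # goes to stdout
stderr.writeLine("Error: ", 42)  # goes to stderr
flushFile(stderr)                # flush, as the echo docs suggest

Running the compiled program with stderr discarded (program 2>/dev/null) should print only "normal output".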

returning values in a bash function

I'm working with a growing bash script, and within this script I have a number of functions. One of these functions is supposed to return a variable's value, but I am running into some issues with the syntax. Below is an example of the code.
ShowTags() {
    local tag=0
    read tag
    echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"
I pulled this code from a Linux Journal article, but the problem is it doesn't seem to work, or perhaps it does and I'm missing something. Essentially, whenever the function is called the script hangs and does not output anything; I need to press CTRL+C to drop back to the CLI.
The article in question is below.
http://www.linuxjournal.com/content/return-values-bash-functions
So my question is: is this the proper way to return a value? Is there a better or more dependable way of doing this? And if there is, please give me an example so I can figure this out without using global variables.
EDIT:
The behavior of this is really getting to me now. I am using the following script.
ShowTags() {
    echo "hi"
    local tag=0
    read tag
    echo "$tag"
}
selected_tag=$(ShowTags)
echo "$selected_tag"
Basically, bash acts as if the read command takes place before the echo at the top of the function. As soon as I pass something to read, though, it runs the top echo and completes the rest of the script. I am not sure why this is happening. This is exactly what happens in my main script.
Change echo "hi" to echo "hi" >/dev/tty.
The reason you're not seeing it immediately is that $(ShowTags) captures all the standard output of the function, and that gets assigned to selected_tag. So you don't see any of it until you echo that variable.
By redirecting the prompt to /dev/tty, it's always displayed immediately on the terminal, not sent to the function's stdout, so it doesn't get captured by the command substitution.
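Applied to the edited script, the fix looks like this:

ShowTags() {
    echo "hi" >/dev/tty    # prompt goes straight to the terminal
    local tag=0
    read tag
    echo "$tag"            # only this is captured by the $(...) substitution
}
selected_tag=$(ShowTags)
echo "$selected_tag"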
You are trying to define a function with Name { ... }. You have to use name() { ... }:
ShowTags() {       # add ()
    local tag=0
    read tag
    echo "$tag"
}                  # end with }
selected_tag=$(ShowTags)
echo "$selected_tag"
It now lets the user type in a string and have it written back:
$ bash myscript
hello world # <- my input
hello world # script's output
You can add a prompt with read -p "Enter tag: " tag to make it more obvious when to write your input.
As #thatotherguy pointed out, your function declaration syntax is off; but I suspect that's a transcription error, since if it were wrong in the actual script you'd get different problems. I think what's going on is that the read tag command in the function is trying to read a value from standard input (by default, the terminal) and pausing until you type something in. I'm not sure what it's intended to do, but as written I'd expect it to pause indefinitely until something is typed.
Solution: either type something in, or use something other than read. You could also add a prompt (read -p "Enter a tag: " tag) to make it more clear what's going on.
BTW, I have a couple of objections to the Linux Journal article you linked. These aren't relevant to your script, but they are things you should be aware of.
First, the function keyword is a nonstandard bashism, and I recommend against using it. myfunc() ... is sufficient to introduce a function definition.
Second, and more serious, the article recommends using eval in an unsafe way. Actually, it's really hard to use eval safely (see BashFAQ #48). You can improve it a great deal just by changing the quoting, and even more by not using eval at all:
eval $__resultvar="'$myresult'" # BAD, can evaluate parts of $myresult as executable code
eval $__resultvar='"$myresult"' # better, is only vulnerable to executing $__resultvar
declare $__resultvar="$myresult" # better still
See BashFAQ #6 for more options and discussion.
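As a concrete sketch of the no-eval approach (the function and variable names here are illustrative, not from the article):

# store a result in the variable whose name the caller supplies
get_tag() {
    local __resultvar=$1
    local myresult="some value"
    # -g makes the variable global even though declare runs inside a function
    # (requires bash >= 4.2; printf -v "$__resultvar" '%s' "$myresult" also works)
    declare -g "$__resultvar=$myresult"
}

get_tag selected_tag
echo "$selected_tag"    # prints: some value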

Use perl to send AT commands to modem

I have an embedded Linux box with perl 5.10 and a GSM modem attached.
I have written a simple perl script to read/write AT commands through the modem's device file (/dev/ttyACM0).
If I write a simple command like "ATZ\r" to the modem and wait for a response, I receive very odd data like "\n\n\nATZ\n\n0\n\nOK\n\n\n\n\nATZ\n\n\n\n..." and the data keeps coming in. It almost seems like the response is garbled up with other data.
I would expect something like "ATZ\nOK\n" (if echo is enabled).
If I send the "ATZ" command manually with e.g. minicom, everything works as expected.
This leads me to think it might be some kind of perl buffering issue, but that's only guessing.
I open the device in perl like this (I do not have Device::SerialPort on my embedded Linux perl installation):
open(FH, "+<", "/dev/ttyACM0") or die "Failed to open com port $comport";
and read the response one byte at a time with:
while (1) {
    my $response;
    read(FH, $response, 1);
    printf("hex response '0x%02X'\n", ord $response);
}
Am I missing some initialization or something else to get this right?
Regards
Klaus
I don't think you need the while loop. This code should send the ATZ command, wait for the response, then print the response:
open(FH, "+>", "/dev/ttyACM0") or die "Failed to open com port $comport";
print FH ("ATZ\n");
$R = <FH>;
print $R;
close(FH);
It may be something to do with truncation. Try changing "+>" into "+<".
Or it may be something to do with buffering; try unbuffering output after your open():
select((select(FH), $| = 1)[0]);
Thanks for your answer. Although not the explicit answer to my question, it certainly put me on the right track.
As noted by mti2935 this was indeed not a perl problem, but a mere tty configuration problem.
Using the "stty" command with the following parameters set my serial port in the "expected" mode:
-opost: disable all output post-processing
-icrnl: do not translate carriage return to newline on input
-onlcr: do not translate newline to carriage return-newline on output
-echo: do not echo input (having this echo enabled together with echo on the modem itself gave me double echoes, resulting in odd behaviour in my script)
It is also possible to use the combination setting "raw", which covers the translation settings in one go (echo still has to be switched off separately).
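For example, using the device file from the question:

stty -F /dev/ttyACM0 -opost -icrnl -onlcr -echo
# or, using the combination setting:
stty -F /dev/ttyACM0 raw -echo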

I want to run a script from another script, use the same version of perl, and reroute IO to a terminal-like textbox

I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use that info hash and some command-line args. Each of these other scripts is available on the command line (via another command-line script) and needs to stay that way, so I can't just put it all into a module and call it good. I do have the authority to alter the scripts, but, again, they must also be usable on the command line.
The current way of calling the other script is by using 'do', which means I can pass in the hash and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) output goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;

my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;
MainLoop;

sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are: IPC::Run(3) and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing and parsing the data cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see each other's variables.
You might avoid using a subprocess, by using fork() + eval() + require(). In that case, no separate perl interpreter will be involved, the forked interpreter will inherit the whole memory of your program with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see from where your second perl script could get its hash when started from CLI.
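To make the $^X and serialization points concrete, here is a minimal sketch using IPC::Open2 (the script name and hash contents are made up for illustration, and the child is assumed to parse its stdin):

use strict;
use warnings;
use IPC::Open2;

my %info = (host => 'example.com', port => 42);

# $^X is the perl interpreter running this script, so the child
# is guaranteed to run under the same perl
my $pid = open2(my $child_out, my $child_in, $^X, 'this_other_thing.pl');

# serialize the hash as one key=value pair per line on the child's stdin
print {$child_in} "$_=$info{$_}\n" for keys %info;
close $child_in;              # signal end-of-input to the child

print while <$child_out>;     # relay whatever the child prints
waitpid $pid, 0;              # reap the child process

The child would then rebuild the hash by splitting each line of its STDIN on the first '='.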

Put output of evaluated file into another file in Nodejs

In my Nodejs script I have a line, called on demand:
eval(fs.readFileSync('eval.js')+'');
It is done this way because sometimes I want to know what is going on in my "nodejs script": the contents of its variables.
So, "eval.js" is usually something like:
console.dir(myVar);
The problem is that it writes to the console. The "parent" script also writes some info to the console, so the console scrolls by very fast and I can't catch what I want.
I am looking for a way to send all the output of "eval.js" into another file, "x.log".
Something like (in "parent" script):
evalFileToLog("eval.js", "x.log");
Or "eval.js":
// something what will forward stdout to "x.log"
console.dir(abc);
// blah blah blah
// something that will restore stdout to its normal behaviour, like it was before.
Thank you for your help!
You can write to another stream (e.g. a file, or standard error) to separate your outputs, but I highly advise against doing synchronous reads of any file in your program.
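A minimal sketch of that idea, using the log file name from the question (a separate Console instance bound to a file stream):

const fs = require('fs');
const { Console } = require('console');

// a Console whose stdout is the file, not the terminal
const fileConsole = new Console(fs.createWriteStream('x.log'));

// eval.js can then use fileConsole instead of console:
fileConsole.dir(myVar);    // written to x.log, the terminal stays quiet

Here myVar stands for whatever variable you want to inspect, as in the question's eval.js.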
Another way to get a view into your app would be to leverage the REPL: with it, you have a shell inside your program instead of flooding the output.
var repl = require('repl');
var bar = new MyApp();
// expose bar as `foo` in the REPL session via the server's context
var replServer = repl.start({ prompt: '> ' });
replServer.context.foo = bar;
And now you have bar exposed as foo in your shell.
