Put output of evaluated file into another file in Node.js

In my Node.js script I have a line that is called on demand:
eval(fs.readFileSync('eval.js')+'');
It is done this way because sometimes I want to know what is going on inside my Node.js script, i.e. the contents of its variables.
So "eval.js" usually looks like:
console.dir(myVar);
The problem is that it writes to the console. The "parent" script also writes some info to the console, so the output scrolls by very fast and I can't find what I want.
I was searching for a way to send all output of the file "eval.js" to another file, "x.log".
Something like (in "parent" script):
evalFileToLog("eval.js", "x.log");
Or "eval.js":
// something that will forward stdout to "x.log"
console.dir(abc);
// blah blah blah
// something that will restore stdout to its normal behaviour, like it was before
Thank you for your help!

You can write to another stream (e.g. a file or standard error) to separate your outputs, but I highly advise against doing synchronous reads of any file in your program.
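For example, here is a minimal sketch of the evalFileToLog idea from the question, using Node's built-in Console class bound to a file stream. One caveat: moving eval into a helper changes which variables the snippet can see, so you may prefer to inline the console swap right next to your own eval call.

var fs = require('fs');
var Console = require('console').Console;

function evalFileToLog(scriptPath, logPath) {
  // console.* output goes to the log file while the snippet runs
  var fileConsole = new Console(fs.createWriteStream(logPath, { flags: 'a' }));
  var realConsole = global.console;
  global.console = fileConsole;
  try {
    eval(fs.readFileSync(scriptPath, 'utf8'));
  } finally {
    global.console = realConsole; // restore normal behaviour
  }
}

evalFileToLog('eval.js', 'x.log');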
Another way to get a view into your app would be to leverage the REPL: with it you get a shell inside your program instead of flooding the output.
var repl = require('repl');
var bar = new MyApp();
var r = repl.start('> ');
r.context.foo = bar; // expose bar to the REPL as foo
And now you have bar exposed as foo in your shell.


How to run a string from an input file as python code?

I am creating something along the lines of a text adventure game. I have a .yaml file as my input. This file looks something like this:
node_type:
    action
title:
    Do some stuff
info:
    This does some stuff and things
script:
    'print("hello world")
    print(ret_val)
    foo.bar(True)
    ret_val = (foo.bar() == True)
    if (thing):
        print(thing)
    print(ret_val)
    '
My end goal is to have my Python program run the script portion of the yaml file exactly as if it had been copy-pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes and the only one using this program, so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my yaml file as a dict using PyYAML:
node = yaml.safe_load(open('file.yaml'))
Then I'm trying to use exec to run my code, and I'm hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
for some reason not actually modifying any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues or is there an all together better way to be doing this?
The reason exec appears to ignore your variables is that exec cannot rebind the local variables of a running function: CPython treats function locals specially, so the locals() dict you pass is effectively a one-way copy. If you stay with exec, pass a single dict as the namespace and read results back out of it afterwards.
One way of avoiding exec altogether would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example with importlib.
You might want to encapsulate the parsed code in a function, which you can then easily call to invoke your action. It would also make sense to specify some default imports there.
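A minimal sketch of that idea (the action_module.py name and the run() signature are illustrative; it also assumes node['script'] contains newline-separated code — a literal block scalar, script: |, preserves newlines in YAML, while the single-quoted scalar in the question folds them into spaces):

import importlib.util

def load_action(node, path="action_module.py"):
    # Wrap the YAML 'script' field in a function so it has its own
    # scope and an explicit return value.
    with open(path, "w") as f:
        f.write("def run(foo, thing, ret_val):\n")
        for line in node["script"].splitlines():
            f.write("    " + line + "\n")
        f.write("    return ret_val\n")
    spec = importlib.util.spec_from_file_location("action_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# usage: ret_val = load_action(node).run(foo, thing, ret_val)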

Node.JS write to stdout without a newline

I have a feeling that this is probably not possible:
I am trying to print text to the terminal without a new line.
I have tried process.stdout.write and npm jetty, but they all seem to automatically append a newline at the end.
Is it possible to write to stdout without having an automatic newline?
Just to be clear: I am not concerned about browsers; I am only interested in UNIX/Linux, writing what in C/C++ would be the equivalent of:
std::cout << "blah";
printf("blah");
process.stdout.write() does not automatically add a newline. If you post precise details about why you think it does, we can probably tell you how you are getting confused. While console.log() does add a newline, process.stdout.write() has no frills and will not write anything you don't explicitly pass to it.
Here's a shell session providing supporting evidence:
echo 'process.stdout.write("123")' > program.js
node program.js | wc -c
3
According to the documentation for process.stdout.write(), a console.log equivalent could look like this:
console.log = function (msg) {
  process.stdout.write(`${msg}\n`);
};
So process.stdout.write should meet your request...
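As a small illustration of why the no-frills behaviour matters (my own example, not from the linked docs): you can redraw a single line in place with a carriage return, which only works because write() appends nothing:

var i = 0;
var timer = setInterval(function () {
  i++;
  // \r returns to the start of the line so the next write overdraws it
  process.stdout.write('\rprogress: ' + i * 10 + '%');
  if (i === 10) {
    clearInterval(timer);
    process.stdout.write('\n'); // finish with an explicit newline
  }
}, 200);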

learnyounode 'My First I/O' example

This program puzzles me. The goal of this program is to count the number of newlines in a file and output the count at the command prompt. learnyounode then runs its own check on the file and sees if its answer matches yours.
So I start with the answer :
var fs = require('fs');
var filename = process.argv[2];
var file = fs.readFileSync(filename);
var contents = file.toString();
console.log(contents.split('\n').length - 1);
learnyounode verifies that this program correctly counts the number of newlines. But when I change the program to either of the following, it doesn't print the same number that learnyounode prints:
file = fs.readFileSync('C:/Nick/test.txt');
file = fs.readFileSync('test.txt');
Shouldn't Node.js's readFileSync be able to take a path and read it correctly?
Lastly, this program is supposed to print out the number of newlines in a file. Why do both the correct program and learnyounode print the same number, one that differs from the actual newline count, every time I run the program?
For example, the number of newlines in test.txt is 3, but running this program prints a different number every time, like 45, 15, 2, etc. Yet at the same time it is verified as correct by learnyounode, because both answers match! What is going on?
EDIT:
test.txt looks like this
ok
testing
123
So, I tried your program on my local machine and it works fine. I am not an expert on learnyounode; I only tried it after reading your question, but I think I understand how it works. With that said, here are the answers to your questions:
Shouldn't Node.js's readFileSync be able to take a path and read it correctly?
This method from Node.js works fine. You can try printing the contents of the file and you'll see that there are no problems.
Why do both the correct program and learnyounode print the same number, one that differs from the actual newline count, every time I run the program?
learnyounode is running your program with a different filename as input each time. It verifies the output of your program by running its own copy of correct code against the same file.
But when I change the program to either of the following, it doesn't print the same number that learnyounode prints.
That is because at this point your code is processing a fixed file, whereas learnyounode is still processing different files on each iteration.
This tripped me up too. If you read the learnyounode instructions closely they explicitly say...
"The full path to the file to read will be provided as the first command-line argument."
This means they are providing the path to their own file.
When you use process.argv[2], this passes the 3rd array item (the learnyounode test txt file) into your script. If you run console.log(process.argv); you'll see the full array looks something like this:
[ '/usr/local/bin/node',
'/Users/user/pathstuff/learnyounode/firstio.js',
'/var/folders/41/p2jvc80j26l7nty0sk0zs1z40000gn/T/_learnyounode_1613.txt' ]
The reason the validation numbers begin to mismatch when you substitute your own text file for theirs is that your file always has 3 lines, whereas their unit tests keep passing in different-length files via process.argv.
Hope that helps.
When you use process.argv[2] in learnyounode, the argument is supplied by learnyounode automatically, which is why it prints a different number of lines (45, 15, 2, etc.) across multiple verification runs.
If you remember the second challenge, "BABYSTEPS", this was given:
learnyounode will be supplying arguments to your program when you run
learnyounode verify program.js so you don't need to supply them yourself.
That's why you see different line counts when program.js is verified multiple times.
There are two different ways.
If you run the program like this:
node program_name.js
then you need to add the path to a text file:
node program_name.js text_file.txt
In this case, make sure the files are in the same directory.
Or you can run it with the command:
learnyounode run program_name.js
and then the default text file will be provided by learnyounode. You can view the content of this text file with
console.log(buffer)
The problem statement says:
The full path to the file to read will be provided as the first command-line argument.
So you have to pass the path/to/file as an argument.
Remember process.argv.
You should run your .js file like this:
node program_name.js /path/to/text_file_name
rather than:
learnyounode run program_name.js /path/to/text_file_name
With this method, Node.js will run your program with the file you specify on the command line.
I hope this answer helps with your programming. :)
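For what it's worth, here is a non-blocking variant of the same counter (a sketch; learnyounode's next exercise introduces fs.readFile):

var fs = require('fs');

// Same newline count, but asynchronous: the callback fires once the
// whole file has been read.
fs.readFile(process.argv[2], 'utf8', function (err, contents) {
  if (err) throw err;
  console.log(contents.split('\n').length - 1);
});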

Running a main function inside another main function in a .hs file changes the IO behaviour?

I'm currently finishing my first Haskell project and, on the final step of the work, my I/O function seems to behave strangely after I connected the different Haskell files.
I have a main file (f1.hs) which loads some info from a multimedia library and saves it into variables in a new .hs file (f2.hs). I also have a "data processing and user interface" file (f3.hs), which reads those variables and, depending on what the user orders, sorts them and displays them on screen. This f3.hs file works with menus, driven by the values of the keyboard input (getLine).
In order to make the work sequence "automatic", I made a "main" function in the f1.hs file which creates the f2.hs file and then, with the System.Cmd module, does a system "runhaskell f3.hs". This routes the user from the f1.hs file to the main function of f3.hs.
The strange thing is that, after I did that, all the getLine calls seem to run before the last line of the prompt is printed.
What it should appear would be:
Question One.....
Answer: (cursor's place)
but what I get is:
Question One.....
(cursor's place)
Answer:
This only happens if I runhaskell f1.hs. If I runhaskell f3.hs directly, it works correctly (though I can't do that in the final job, as the f2.hs file needs to be created first). Am I doing something wrong with this sequence?
I'm sorry for the lack of code, but I thought it wouldn't help in understanding the problem...
This is typically caused by line buffering, meaning the text does not actually get printed to the console until a newline is printed. The solution is to manually flush the buffer, i.e. something like this:
import System.IO

main = do
  ...
  putStr "Answer: "
  hFlush stdout
  ...
Alternatively, you can disable buffering by using hSetBuffering stdout NoBuffering, though this comes at a slight performance cost, so I recommend doing the flushing manually when you need to print a partial line.
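For instance, here is a minimal self-contained sketch of the flush-before-read pattern (a standalone example, not the asker's code):

import System.IO

main :: IO ()
main = do
  putStr "Answer: "
  hFlush stdout          -- make the prompt visible before getLine blocks
  answer <- getLine
  putStrLn ("You said: " ++ answer)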

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like this:
Use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename, path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename, path to logfile2)
....
sqldb->configuration
Use something like a module (one file per program), e.g. type1.module (regexp, log structure (some variables), sql (tables and functions)).
Fork or thread processes (I don't know which is better on Linux these days) for the different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there a better way? I think some regexps would be the same, like date/time/IP/URL. What should I do about that? Or what have I missed?
example: mta exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
Everything that is bold is already parsed and will be put into the sqldb.incoming table. Right now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse the log line by line.
Flexibility: let's say I want to add support for an Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one on our project. See MT::FileMgr and MT::FileMgr::* here.
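To make that concrete, here is a minimal sketch of what steps 2 and 3 could look like (the class names and the Apache regex are illustrative, not from MT::FileMgr):

package Parser::Base;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    return bless { path => $args{path} }, $class;
}

# Subclasses override parse_line to return a hashref of fields.
sub parse_line { die "subclass must override parse_line" }

package Parser::Apache;
use parent -norequire, 'Parser::Base';

sub parse_line {
    my ($self, $line) = @_;
    # rough common-log-format pattern: client IP, timestamp, request path
    return unless $line =~ /^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)/;
    return { ip => $1, timestamp => $2, path => $3 };
}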
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you want, running any combination of Perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
use File::Tail;

while (1) {
    my ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"} . " (" . localtime(time) . ") " . $_->read;
        }
    }
}
(adapted from the File::Tail documentation)
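To make that loop self-contained, here is a hedged sketch of how the @files array and the $timeout it polls could be set up (the paths are placeholders):

use File::Tail;

my @paths = ('/var/log/apache.log', '/var/log/exim.log');
# one File::Tail object per watched log; maxinterval caps the poll delay
my @files = map { File::Tail->new(name => $_, maxinterval => 5) } @paths;
my $timeout = 10;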
