How do I change the stdin stream with the Eclipse run command? - linux

I'm working on a project where rather than using command line arguments I am supposed to just pipe a file to stdin and pipe output to another file.
i.e. I need to run something like:
./program < infile.txt > outfile.txt
Is there a way to specify to Eclipse this is the command I want when it is run?
I am running the Eclipse C/C++ environment.

After using Eclipse for some time, it seems there is no straightforward way to do this. I've resorted to adding debug code that opens a specific file and uses it as the input stream.
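For reference, a minimal C sketch of that workaround (the DEBUG_STDIN guard and the infile.txt name are illustrative assumptions, not something Eclipse provides):

#include <stdio.h>

int main(void) {
#ifdef DEBUG_STDIN
    /* Debug-only workaround: reopen stdin on a fixed file so the program
       behaves as if it had been started as ./program < infile.txt */
    if (freopen("infile.txt", "r", stdin) == NULL) {
        perror("freopen");
        return 1;
    }
#endif
    int c;
    while ((c = getchar()) != EOF)  /* echo stdin to stdout */
        putchar(c);
    return 0;
}

The same trick with freopen("outfile.txt", "w", stdout) covers the output side of the redirection.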

Related

Redirect bash script output into an editor without saving a file

I have a script that outputs a few lines and I want to have it open in an editor on my system (nano, vim, whatever), ready to edit. However, I don't want to save it to a temporary file or a variable first; I want it to land directly in the editor (I think I need to get the output into the editor's buffer somehow).
I naively have tried something like this (I was thinking I could pipe the output into nano somehow):
echo "test" | nano # throws Too many errors from stdin
Alternatively I thought of opening the output like it's a file:
nano $(echo "test")
It opens only the editor but it's empty. Is this even possible or is a temp file strictly necessary?
You can do
echo 'test' | nano -
The - argument tells nano to read from stdin rather than from a file. This is an editor-specific solution, though. A more portable solution would use a temporary file, which you can create with mktemp.
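A quick sketch of that fallback, where yourscript.sh stands in for whatever produces the output:

tmpfile=$(mktemp) || exit 1        # create a unique temporary file
./yourscript.sh > "$tmpfile"       # capture the output in it
"${EDITOR:-nano}" "$tmpfile"       # open it in your editor of choice
rm -f "$tmpfile"                   # clean up afterwards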
You could start vi without a filename, and then do:
!!ls -l
or
!!yourScript.sh
(!! filters the current line through the given command, so on an empty buffer it simply inserts the command's output.)
Just for completeness, if you want to go the other way, and pass a vi buffer full of data to an external script/command, such as sort or fmt, you can do:
gg!Gsort # go to the top of the file (gg), then filter from there through the last line (!G) with "sort"

Pipe Command Not Working In Windows

type coverage/lcov.info | coveralls/bin/coveralls.js
I used the above command in an appveyor.yml file to generate coverage on Coveralls, but there is one problem: the pipe command is not working on Windows. Please suggest an alternative command for Windows.
What did you expect to happen? JavaScript files such as coveralls.js are not an executable file type on Windows. So while the pipe does work on Windows, the command to the right of the pipe needs to be executable.
Maybe you meant something like this:
jscript coveralls/bin/coveralls.js < coverage/lcov.info
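Since coveralls.js ships as a Node script, invoking it through node explicitly may be closer to what the original pipeline intended (assuming node is on the PATH):

node coveralls/bin/coveralls.js < coverage/lcov.info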

Can I run a node child process with node script from a buffer and not from another file?

Currently I am generating a node script into a string/buffer inside a parent node script. I then write the generated script to a file and execute that file in a child process. Finally, I unlink the temporary script file.
Here is the simplified code:
const fs = require('fs');
const child_process = require('child_process');

fs.writeFileSync(scriptFile, program_code);
const script_output = child_process.execSync("node " + scriptFile).toString('utf8');
fs.unlinkSync(scriptFile);
Now that I have the output, I continue further processing. This works fine.
However, I feel that the overhead of writing to a temp file could be avoided if I can somehow run the node script from the in-memory buffer that I have rather than writing it to a file first.
I looked at the child_process module of node but did not find any way to do this. Maybe I missed something, or is there some other way to do this? I am looking for something like this:
script_output = child_process.execSync("node " + ...).toString('utf8')
Perhaps some shell programming trick? I looked at the -e and the -i options to node and they did not cut it. I tried the << here-document shell feature with the -i option, but the > prompt of the REPL mode appears to create a syntax issue.
Look into Node's VM API. I haven't actually tried this, but it looks like vm.runInNewContext() might do what you need without launching a new copy of Node.
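A minimal sketch of that suggestion (the sample program_code and the result variable are made up for illustration):

const vm = require('vm');

const program_code = 'result = 6 * 7;';  // stand-in for the generated script
const sandbox = { result: null };        // this object becomes the script's global scope
vm.runInNewContext(program_code, sandbox);
console.log(sandbox.result);             // prints 42

Unlike execSync, this runs the code in the same process, so a crash or endless loop in the generated script would take the parent down with it.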

What can I use to capture every command I run in bash (a-la history)

I know history will capture the commands that I run, but it is shell-specific. I work with multiple shells and multiple hosts and would like to write a small script which, after every command I run, dumps that command to some file along with the host name. This way, I can implement my own history command which reads from that file and can take a host as an argument, which would be handy for me. I'm not sure how to get the first part, though, i.e. how to get every shell command I type to trigger the "dump that command into a file" step. Any ideas?
Thanks
In bash, the PROMPT_COMMAND shell variable contains a command that will be executed before the PS1 prompt is displayed. So yours could be something like:
history | tail -n1 | perl -npe 's/^\s+\d+\s+//' | yourcommand HOST
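A sketch of wiring that into ~/.bashrc (the function name and log file path are hypothetical):

log_command() {
    # append the last history entry, stripped of its number, to a per-host log
    history 1 | sed 's/^ *[0-9]* *//' >> ~/.cmdlog."$(hostname)"
}
PROMPT_COMMAND=log_command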
The script utility should solve your problem. It records everything you type and all that is printed on the terminal in a file (even including terminal control codes, so if you cat that file on the console, you even reproduce the original text colors).
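For example, to record a whole session into a per-host log (the file name is arbitrary):

script ~/session-$(hostname).log   # starts a recorded subshell; type exit to stop recording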

Limit output of all Linux commands

I'm looking for a way to limit the amount of output produced by all command line programs in Linux, and preferably tell me when it is limited.
I'm working over a connection to a server that has a laggy display. Occasionally I will accidentally run a command which outputs a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files. I then have to wait a while for all the output to be printed to the terminal.
So is there a way to automatically pipe all output into a command like head or wc to prevent too much output having to be printed to terminal?
I don't know about the general case, but for each well-known command (cat, ls, find?)
you could do the following:
make a hard link to the existing utility under a new name
write a tiny bash function that calls the renamed utility and pipes to head (or wc, or whatever)
alias the original name of the utility to call your function.
So along these lines (utterly untested):
$ ln "$(which cat)" ~/bin/old_cat
function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
Making aliases of your commands would be a good start (or small shell functions where arguments are involved, since an alias cannot reference its arguments). Something like:
alias lm="ls -al | more"
cam () { cat "$@" | more; }
Perhaps using screen could help?
This makes me think of bash's command_not_found_handle. Since bash lets you define a handler function that runs whenever a program is not found, what about writing your own handler and clearing $PATH, so that every command is executed through it with its output redirected to a filtering pipe? (I did not try it myself.)
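A sketch of that trick, equally untested (assumes bash 4+, which introduced the handler; REAL_PATH is an illustrative name):

REAL_PATH=$PATH    # remember the real search path
PATH=              # empty PATH so no external command is found directly

command_not_found_handle() {
    # re-run the command with the real PATH, truncating its output
    PATH=$REAL_PATH "$@" | head -n 100
}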
Assuming you're working over a network connection, like ssh, into a remote server, try piping the output of the command to less. That way you can manage and navigate the output from the program on the server more comfortably. Use j and k to move up and down per line, and Ctrl-U and Ctrl-D to move half a page up and down. When you do this, only the relevant text (i.e. what fits on the screen) is transmitted over the network.
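For example:

ls -lR /usr/share | less   # less runs on the server, so only a screenful is sent at a time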
