How to take input from a file and store output to another file in Linux

I want my user process to take its input from one file and store its result in another file. I have done this:
$ ./a.out < inputFile.txt > outputFile.txt
This works for me, but I am worried about the redirection: for example, what if the contents of inputFile.txt got redirected into outputFile.txt and only then made available to a.out?
I want to know whether there is an order of evaluation, or at least how the shell interprets the line above.

The shell sets up both redirections before a.out ever runs, and the two are independent. < inputFile.txt opens the file for reading and attaches it to the process's standard input (file descriptor 0); > outputFile.txt creates or truncates the output file and attaches it to standard output (file descriptor 1). Redirections on a command are processed left to right, but because these two touch different descriptors the order makes no difference here.
Nothing ever flows from inputFile.txt into outputFile.txt directly. The shell only wires up the descriptors; data reaches outputFile.txt solely because a.out reads from its stdin and writes to its stdout.
$ ./a.out > file   # file is created/truncated, then attached to a.out's stdout
$ ./a.out < file   # file is opened for reading, then attached to a.out's stdin
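One related gotcha: because the shell opens (and truncates) the output file before the program runs, using the same file for both input and output destroys the data. A quick demonstration (f here is just a scratch file):
$ echo hello > f
$ cat < f > f   # the shell truncates f for writing before cat can read it
$ cat f         # f is now empty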

Related

How to optimize reading from a large file in a while loop in shell script

I was going through some articles on the internet about optimizing file input to a loop and tried to test things myself. They claim that file descriptor manipulation can in most cases be faster and more efficient than reading from the file directly into the loop. I tried to test it like this:
First read from the file directly into the loop:
time while read a ; do :;done < testfile
The time this command took to run is:
real 0m8.782s
user 0m1.292s
sys 0m0.399s
Now I try some file-descriptor manipulation, as one of the articles suggested:
First I duplicate file descriptor 0 onto file descriptor 3: exec 3<&0
Then I redirect testfile to file descriptor 0: exec 0<testfile
At the end I restore stdin with exec 0<&3, which duplicates file descriptor 3 back onto 0. The complete line is:
exec 3<&0;exec 0<testfile; time for i in $(seq 1 20);do while read a; do :;done; done; exec 0<&3
This gives me a time as:
real 0m8.792s
user 0m1.258s
sys 0m0.430s
But I see almost the same time in both cases, in fact a tad slower when I use file descriptors. The file testfile is 6 MB, with close to 400k lines of 20-25 characters each.
In fact, for even bigger files, reading directly from the file is actually faster than the file descriptor manipulation.
Use C. That is the fastest you can get, if you really care about speed.
You can write your own program to getline() from the input stream and then call system() on each line. This may be slower because of the fork() and exec() calls, but may be way faster if you can move your per-line operations into C code.
You can write your own shell builtin. The shell's read builtin just calls read() (browse the bash sources). You could write your own builtin that loops a command over the input faster than the default read builtin, e.g. my_read_builtin 'file' -- 'command to run on each line'.
To make your post reproducible I created a big file:
$ for ((i=0;i<1200000;++i)); do echo ${RANDOM}; done >/tmp/1
$ du -hs /tmp/1
6.5M /tmp/1
Then run:
$ time ( printf '#include<errno.h>\n#include<stdlib.h>\n#include<stdio.h>\nint main(){char*b=0;size_t n=0;ssize_t r;while((r=getline(&b,&n,stdin))>0);if(errno)abort();return 0;}\n' | gcc -Ofast -Wall -xc -o/tmp/a.out -; /tmp/a.out </tmp/1; )
real 0m0.095s
user 0m0.064s
sys 0m0.031s
$ time ( cat >/dev/null; ) </tmp/1
real 0m0.007s
user 0m0.001s
sys 0m0.006s
$ time ( while read l; do :; done </tmp/1; )
real 0m6.994s
user 0m5.222s
sys 0m1.731s
$ time ( exec 3</tmp/1; while read -u3 l; do :; done; )
real 0m7.953s
user 0m5.965s
sys 0m1.949s
$ time xargs -a /tmp/1 -n1 true
< very, very slow, got impatient and CTRL+C it >
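If the goal is only to load the lines into the shell, rather than to run a command per line, bash 4's mapfile builtin reads the whole file into an array in a single builtin call and skips the per-line read cost entirely. A sketch against the same test file (assuming bash 4+):
$ time mapfile -t lines </tmp/1
This is typically far faster than the while read loops above, because the dominant cost there is the read builtin itself, not the descriptor juggling.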

Redirecting the output of multiple C programs to a single file in the terminal

I have a C program that I compiled using gcc. When running the executable, I saved the output to a separate file:
$ ./a.out > outputs
Then I compiled another program and ran it, directing its output to the same file, which erased the old content and wrote the new content. How do I direct all the outputs to the same file without erasing the previous content?
The output redirection > in ./a.out > outputs truncates the file (outputs) every time, creating it if it does not exist. Use append redirection instead:
./a.out >> outputs
>> appends the new data to the old contents every time.
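A quick demonstration, using echo to stand in for the compiled programs:
$ echo first > outputs
$ echo second >> outputs
$ cat outputs
first
second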

Very weird redirection behavior

I execute a program which prints some text. I redirect the text to a file using >, but I cannot see any of the text in the file. For example, if the program prints "Hello" I can see the result on the shell:
$ ./a.out arg
Hello
But after I redirect, I get no hello message on the shell, nor in the redirected file.
$ ./a.out arg > log.txt
(print nothing)
$ cat log.txt
(print nothing)
I have no idea what's going on. Does someone know what's happening here, or has someone run into a similar situation?
OS: Ubuntu 14.10, x86_64 arch, and the program is really chromium-browser rather than ./a.out. I edited its JavaScript engine (v8, which is included in chromium-browser) and tried to print some logs with lots of text. I tried to save them by redirection, but it doesn't work.
Of course I checked that the > symbol works. It behaves as expected with other programs like echo, ls, and so on.
$ echo hello > hello.txt
$ cat hello.txt
hello
How can the messages just go away? I thought they should be printed to stdout (or stderr), or end up in the file. But they simply disappear when I use the > symbol.
It is somewhat common for programs to check isatty(stdout) and display different output based on whether stdout is connected to a terminal or not. For example, ls will display file names in a tabular format if output is to a terminal, but display them strictly one per line otherwise. It does this to make it easy to parse its output when it's part of a pipeline.
I have not looked at Chrome's source code myself, so this is speculation, but it's possible Chrome is performing this sort of check and changing its output based on where stdout is redirected to.
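You can watch this kind of check from the shell itself: test -t 1 reports whether file descriptor 1 is a terminal, and ls changes its layout accordingly. A quick sketch:
$ if [ -t 1 ]; then echo terminal; else echo redirected; fi
terminal
$ ( if [ -t 1 ]; then echo terminal; else echo redirected; fi ) > /tmp/t; cat /tmp/t
redirected
$ ls        # multi-column listing on a terminal
$ ls | cat  # one name per line when piped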
Try using 2>, which redirects stderr to the file.
Or try &>, which in bash redirects everything (both stdout and stderr).
See more at http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html
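Applied to the command from the question, the portable form and the bash shorthand look like this (log.txt as in the question):
$ ./a.out arg > log.txt 2>&1   # stdout to log.txt, then stderr to the same place
$ ./a.out arg &> log.txt       # bash-only shorthand for the same thing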

How to print the output/error to a text file?

I'm trying to redirect(?) my standard error/output to a text file.
I did my research, but for some reason the online answers are not working for me.
What am I doing wrong?
cd /home/user1/lists/
for dir in $(ls)
do
  (
    echo | $dir > /root/user1/$dir" "log.txt
  ) > /root/Desktop/Logs/Update.log
done
I also tried
2> /root/Desktop/Logs/Update.log
1> /root/Desktop/Logs/Update.log
&> /root/Desktop/Logs/Update.log
None of these work for me :(
Help please!
Try this for the basics:
echo hello >> log.txt 2>&1
Could be read as: echo the word hello, redirecting and appending STDOUT to the file log.txt. STDERR (file descriptor 2) is redirected to wherever STDOUT is being pointed. Note that STDOUT is the default and thus there is no "1" in front of the ">>". Works on the current line only.
To redirect and append all output and error of all commands in a script, put this line near the top. It will be in effect for the length of the script instead of doing it on each line:
exec >>log.txt 2>&1
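A minimal sketch of a script using that line (log.txt is just an example name):
#!/bin/bash
exec >>log.txt 2>&1   # from here on, all stdout and stderr is appended to log.txt
echo "this line lands in the log"
ls /nonexistent       # this error message lands there too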
If you are trying to obtain a list of the files in /home/user1/lists, you do not need a loop at all:
ls /home/user1/lists/ >Update.log
If you are attempting to run every file in the directory as an executable with a newline as its input, and collect the output from all these programs in Update.log, try this:
for file in /home/user1/lists/*; do
  echo | "$file"
done >Update.log
(Notice how we avoid the useless use of ls and how there is no redirection inside the loop.)
If you want to create an empty file called <filename>log.txt for each file in the directory, you would do
for file in /home/user1/lists/*; do
  touch "$(basename "$file")"log.txt
done
(Using basename to obtain the file name without the directory part avoids the cd but you could do it the other way around. Generally, we tend to avoid changing the directory in scripts, so that the tool can be run from anywhere and generate output in the current directory.)
If you want to create a file containing a single newline, regardless of whether it already exists or not,
for file in /home/user1/lists/*; do
  echo >"$(basename "$file")"log.txt
done
In your original program, you redirect the echo inside the loop, which means that the redirection after done will not receive any output at all, so the created file will be empty.
These are somewhat wild guesses at what you might actually be trying to accomplish, but should hopefully help nudge you slightly in the right direction. (This should properly be a comment, I suppose, but it's way too long and complex.)

Shell script to call external program which has user-interface

I have an external program, say a.out, which asks for an input parameter while it is running, i.e.:
./a.out
Please select either 1 or 2:
1) this will do something
2) this will do something else
Then when I enter '1', it will do its job. I don't have the source code, just the binary, so I can't change it.
I want to write a shell script which runs a.out and also feeds the '1' in.
I tried many things including silly things like:
./a.out 1
./a.out << 1
./a.out < 1
etc.
but none of them work.
Could you please let me know if there is any way to write such a shell script?
Thanks,
dbm368
I think you just need a pipe. For example:
echo 1 | ./a.out
In general terms, a pipe takes whatever the program on the left writes to stdout and redirects it to the stdin of the program on the right.
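If the program asks several questions, you can feed it several lines the same way; a sketch, assuming a.out reads all of its answers from stdin:
printf '1\n' | ./a.out   # equivalent to echo 1 | ./a.out
./a.out <<EOF
1
EOF
(If the program reads directly from the terminal, /dev/tty, rather than from stdin, none of these will work and you would need a tool like expect.)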
