When I execute the program in the console I just do this:
./c1 2500
textfile.txt
and it just prints an integer. The thing is that I want to feed it 1000 text files as input, so I made this script:
c=1
while [ $c -le 1000 ]
do
./c1 2500 >> sal.txt
$c.txt
(( c++ ))
done
The trouble is that the script is not putting the output in the text file because it is not iterating as it should. I think the problem is in how the name of the text file is introduced as $c.txt. How can I solve this?
Thanks for reading
$c.txt on its own line is not a command, and the bash interpreter can't understand what it means.
If you want to create a file, use touch [file],
or, if you want to copy an existing file to a destination, use cp [src_file] [dst_file].
So the code may look like this:
./c1 2500 > $c.txt
or you may want to append the result to a file:
./c1 2500 > $c.txt
cat $c.txt >> sal.txt
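Note that, judging from your interactive session (running ./c1 2500 and then typing textfile.txt), c1 may actually read the file name from standard input. If that's the case, a minimal sketch of your loop (assuming that behavior) would be:
c=1
while [ $c -le 1000 ]
do
    echo "$c.txt" | ./c1 2500 >> sal.txt
    (( c++ ))
done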
PS:
> and >> are both output redirection operators:
> writes the output to the file (overwriting it)
>> appends the output to the file
cat concatenates files and prints them to standard output
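A quick demonstration, using a scratch file log.txt:
$ echo first > log.txt
$ echo second >> log.txt
$ cat log.txt
first
second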
Related
I am trying to cat file.txt, loop through its whole content twice, and copy it to a new file, file_new.txt. The bash command I am using is as follows:
for i in {1..3}; do cat file.txt > file_new.txt; done
The above command just gives me the same contents as file.txt, so file_new.txt ends up the same size (1 GB).
Basically, if file.txt is a 1GB file, then I want file_new.txt to be a 2GB file, double the contents of file.txt. Please, can someone help here? Thank you.
Simply apply the redirection to the for loop as a whole:
for i in {1..3}; do cat file.txt; done > file_new.txt
The advantage of this over using >> (aside from not having to open and close the file multiple times) is that you needn't ensure that a preexisting output file is truncated first.
Note that the generalization of this approach is to use a group command ({ ...; ...; }) to apply redirections to multiple commands; e.g.:
$ { echo hi; echo there; } > out.txt; cat out.txt
hi
there
Given that whole files are being output, the cost of invoking cat for each repetition will probably not matter that much, but here's a robust way to invoke cat only once:[1]
# Create an array with as many repetitions of the filename 'file.txt' as needed.
files=(); for ((i=0; i<3; ++i)); do files[i]='file.txt'; done
# Pass all repetitions *at once* as arguments to `cat`.
cat "${files[@]}" > file_new.txt
[1] Note that, hypothetically, you could run into your platform's command-line length limit, as reported by getconf ARG_MAX. Given that on Linux that limit is 2,097,152 bytes (2 MB), that's not likely, though.
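For instance, on a typical Linux system (the output varies by platform):
$ getconf ARG_MAX
2097152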
You could use the append operator, >>, instead of >. Then adjust your loop count as needed to get the output size desired.
You should adjust your code so it is as follows:
for i in {1..3}; do cat file.txt >> file_new.txt; done
The >> operator appends data to a file rather than writing over it (>)
if file.txt is a 1GB file,
cat file.txt > file_new.txt
cat file.txt >> file_new.txt
The > operator will create file_new.txt (1GB),
The >> operator will append to file_new.txt (making it 2GB).
for i in {1..3}; do cat file.txt >> file_new.txt; done
This command will make file_new.txt 3GB, because for i in {1..3} runs three times.
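Afterwards you can verify the resulting size, e.g.:
$ ls -lh file_new.txt
which should report roughly three times the size of file.txt.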
As others have mentioned, you can use >> to append. But, you could also just invoke cat once and have it read the file 3 times. For instance:
n=3; cat $( yes file.txt | sed ${n}q ) > file_new.txt
Note that this solution exhibits a common anti-pattern and fails to properly quote the arguments, which will cause issues if the filename contains whitespace. See mklement's answer above for a more robust approach.
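For reference, a whitespace-safe variant that keeps the single cat invocation (this sketch assumes GNU xargs for the -d option):
n=3; yes file.txt | head -n "$n" | xargs -d '\n' cat > file_new.txt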
For testing purposes I have to create a file with 1000 lines in it with one command.
What is a command to create a file on Linux?
touch is usually used to create empty files, but if you want to create a non-empty file, just redirect the output of some command to that file, like in the first line of this example:
$ echo hello world > greeting.txt
$ cat greeting.txt
hello world
A way to create a file with 1000 lines would be:
$ seq 1000 > file
for x in `seq 1 1000`; do echo "sometext" $x >> file.txt; done
The above for loop prints "sometext" followed by $x into file.txt; $x is the iteration number (1, 2, 3, ..., 1000).
for (( c=1; c<=1000; c++ )); do echo "something $c times" >> test.txt; done
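An equivalent without an explicit loop, relying on the fact that printf reuses its format string for each remaining argument:
printf 'something %d times\n' $(seq 1 1000) > test.txt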
I need to automate a process using a script and generate output files named after the input files, but with some additions to the names.
My process is a Java program with two input arguments and two output arguments:
java #process_class# abc.txt abd.txt abc.1.out abd.a.out
If I want to iterate this over the set of text files in my folder, how can I do this?
If you have the files a.txt, b.txt, and c.txt in the directory in which this is run, this program will output a_2.txt, b_2.txt, and c_2.txt with foo appended to each (replace the foo line with your processing commands).
for f in *.txt; do
    f2="${f%.*}_2.txt"
    cp "$f" "$f2"
    echo "Processing $f2 file..."
    echo "foo" >> "$f2"   # Your processing command here
done
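If, as in your example, the Java program consumes the input files two at a time and derives the output names from them, a hypothetical sketch could look like this (ProcessClass stands in for your actual #process_class#):
# Take the .txt files two at a time; derive the output names from the inputs.
set -- *.txt
while [ "$#" -ge 2 ]; do
    a=$1; b=$2; shift 2
    java ProcessClass "$a" "$b" "${a%.*}.1.out" "${b%.*}.a.out"
done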
VAR="INPUTFILENAME"
# One solution this does not use the VAR:
touch INPUTFILENAME{1,2,3,4,5,6,7,8,9,10}
# Another
for i in `seq 1 20` ; do
touch "${VAR}${i}"
done
And there are several other ways.
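For example, a bash brace-expansion range avoids both the explicit list and the seq subshell:
touch "${VAR}"{1..20}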
I have several (60,000) files in a folder that need to be combined into 3 separate files.
How would I cat this so that I could have each file containing the contents of ~20,000 of these files?
I know it would be like a loop:
for i in {1..20000}
do
cat file-$i > new_file_part_1
done
Doing:
cat file-$i > new_file_part_1
Will truncate new_file_part_1 every time the loop iterates. You want to append to the file:
cat file-$i >> new_file_part_1
The other answers close and open the file on every iteration. I would prefer
for i in {1..20000}
do
cat file-$i
done > new_file_part_1
so the output of all the cat runs is piped into one file, opened just once.
Assuming it doesn't matter which input file goes to which output file:
for i in {1..60000}
do
cat file$i >> out$(($i % 3))
done
This script uses the modulo operator % to divide the input into 3 bins; it will generate 3 output files:
out0 contains file3, file6, file9, ...
out1 contains file1, file4, file7, ...
out2 contains file2, file5, file8, ...
#!/bin/bash
cat file-{1..20000} > new_file_part_1
This launches cat only once and opens and closes the output file only once. No loop required, since cat can accept all 20000 arguments.
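The brace expansion is performed by the shell before cat even runs, e.g.:
$ echo file-{1..3}
file-1 file-2 file-3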
An astute observer noted that on some systems, the 20000 arguments may exceed the system's ARG_MAX limit. In such a case, xargs can be used, with the penalty that cat will be launched more than once (but still significantly fewer than 20000 times).
echo file-{1..20000} | xargs cat > new_file_part_1
This works because, in Bash, echo is a shell built-in and as such is not subject to ARG_MAX.
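An equivalent that generates one file name per line, assuming GNU seq with its -f format option, would be:
seq -f 'file-%g' 1 20000 | xargs cat > new_file_part_1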
I have a text file, say, input.txt, and I want to run a command and write the output to another text file, say, output.txt. I need to read values from input.txt (one value per line), insert each into the command, and then write the result to output.txt. I tried the following and it works fine for me:
for i in `cat input.txt`; do command -m $i -b 100; echo $i; done >> output.txt
Now, I need to make some improvements over this but I have little experience in Linux so I need some help.
What I need to do is:
1) Before each command result, I want to insert the value of i, separated by a comma. For example:
i1,result1
i2,result2
i3,result3
2) I need to change the second value used in my command from a fixed value (100) to a value read from the input file. So the new input file, say, newinput.txt, contains two values per line, as follows:
i1,value1
i2,value2
i3,value3
Try this, in bash:
IFS=','
while read i val; do        # each "i,value" line is split on the comma
    echo -n "$i,"           # print i followed by a comma, without a newline
    command $i $val
done < newinput.txt > output.txt
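A small refinement: scoping the IFS assignment to the read call itself keeps the comma splitting from leaking into the rest of the script, and -r stops read from mangling backslashes:
while IFS=',' read -r i val; do
    echo -n "$i,"
    command "$i" "$val"
done < newinput.txt > output.txt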