error reading input file: Key has expired - linux

I am currently writing a bash script. The purpose of this script is not important, but one piece of code is generating an error:
./script.bs: line 175: read: read error: 0: Key has expired
./script.bs: error reading input file: Key has expired
The code for lines 175-189 is below. This specific piece of code does the following:
- Reads a txt file that contains a list of target files.
- For each target file, reads every line; if that line is already contained in $NumbersFile, it does nothing, and if that line is NOT contained in $NumbersFile, it appends the line to $NumbersFile.
This piece of code generally works, and had added 65810 lines of content to $NumbersFile before it hit the error stated above.
I'd like to add that the while loop on line 175 (where the error is happening) is supposed to read about 70,000 lines from the given file.
How do I fix this error so that my script can finish running without a "Key has expired" error?
NumbersFile="numbers.txt"
while read line; do
    while read gramline; do
        has="0"
        if grep -Fq -- "$gramline" "$NumbersFile"; then
            has="1"
        fi
        if [ "$has" -eq "0" ]; then
            echo "$gramline" >> $NumbersFile
        fi
    done < "$line"
done < "targetsfile.txt"

If my comment is accurate, perhaps this might be faster:
{ cat targetsfile.txt; xargs cat < targetsfile.txt; } | sort -u > numbers.txt
Or as clarified:
xargs cat < targetsfile.txt | sort -u > numbers.txt
Notes:
- The braces simply group the cat and xargs commands so that their combined output can be piped into sort. Documented in the Bash manual at 3.2.4.3 Grouping Commands.
- The first cat outputs the contents of the targetsfile.txt file itself.
- The xargs cat < targetsfile.txt construct executes the cat command for every file listed in the targets file. It's a very concise and efficient way to run
while IFS= read -r line; do cat "$line"; done < targetsfile.txt
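Note that sort -u reorders the output, while your original loop preserves the order in which lines were first encountered. If that order matters, a sketch using awk (keeping the first occurrence of each line; this assumes numbers.txt can be rebuilt from scratch):
xargs cat < targetsfile.txt | awk '!seen[$0]++' > numbers.txt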

Related

How to use a line read from a file in a grep command

I'm sorry for my poor English, first.
I want to read a file (tel.txt) that contains many telephone numbers (one number per line) and pass each line to a grep command to search for that specific number in a source file (another file).
I wrote this code:
dir="/home/mujan/Desktop/data/ADSL_CDR_Text_Parts_A"
file="$dir/tel.txt"
datafile="$dir/ADSL_CDR_Like_Tct4_From_960501_to_97501_Part0.txt"
while IFS= read -r line
do
    current="$line"
    echo `grep -F $current "$datafile" >> output.txt`
done < $file
A sample of the tel file:
44001547
44001478
55421487
But that code returns nothing!
When I set the 'current' variable to a literal value, it works correctly.
What happened?
Your grep command is redirected to write its output to a file, so you don't see it on the terminal.
Anyway, you should probably be using the much simpler and faster
grep -Ff "$file" "$datafile"
Add | tee -a output.txt if you want to save the output to a file and see it at the same time.
echo `command` is a buggy and inefficient way to write command. (echo "`command`" would merely be inefficient.) There is no reason to capture standard output into a string just so that you can echo that string to standard output.
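Putting those pieces together with the paths from the question, the whole loop collapses to a single pipeline (a sketch):
dir="/home/mujan/Desktop/data/ADSL_CDR_Text_Parts_A"
grep -Ff "$dir/tel.txt" "$dir/ADSL_CDR_Like_Tct4_From_960501_to_97501_Part0.txt" | tee -a output.txt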
Why don't you search for the line variable directly? I've done some tests; this script works on my Linux box (CentOS 7.x) with the bash shell:
#!/bin/bash
file="/home/mujan/Desktop/data/ADSL_CDR_Text_Parts_A/tel.txt"
while IFS= read -r line
do
    echo `grep "$line" /home/mujan/Desktop/data/ADSL_CDR_Text_Parts_A/ADSL_CDR_Like_Tct4_From_960501_to_97501_Part0.txt >> output.txt`
done < $file
Give it a try... It shows nothing on the screen since you're redirecting the output to the file output.txt, so the matching results are saved there.
You should use file descriptors when reading with a while loop. Instead, use a for loop to avoid false redirections:
dir="/home/mujan/Desktop/data/ADSL_CDR_Text_Parts_A"
file="$dir/tel.txt"
datafile="$dir/ADSL_CDR_Like_Tct4_From_960501_to_97501_Part0.txt"
for line in `cat $file`
do
    current="$line"
    echo `grep -F $current "$datafile" >> output.txt`
done

Copy a txt file twice to a different file using bash

I am trying to cat file.txt, loop over its whole content twice, and copy it to a new file, file_new.txt. The bash command I am using is as follows:
for i in {1..3}; do cat file.txt > file_new.txt; done
The above command just gives me the same contents as file.txt, so file_new.txt ends up the same size (1 GB).
Basically, if file.txt is a 1GB file, then I want file_new.txt to be a 2GB file, double the contents of file.txt. Please, can someone help here? Thank you.
Simply apply the redirection to the for loop as a whole:
for i in {1..3}; do cat file.txt; done > file_new.txt
The advantage of this over using >> (aside from not having to open and close the file multiple times) is that you needn't ensure that a preexisting output file is truncated first.
Note that the generalization of this approach is to use a group command ({ ...; ...; }) to apply redirections to multiple commands; e.g.:
$ { echo hi; echo there; } > out.txt; cat out.txt
hi
there
Given that whole files are being output, the cost of invoking cat for each repetition will probably not matter that much, but here's a robust way to invoke cat only once:[1]
# Create an array of repetitions of filename 'file' as needed.
files=(); for ((i=0; i<3; ++i)); do files[i]='file'; done
# Pass all repetitions *at once* as arguments to `cat`.
cat "${files[#]}" > file_new.txt
[1] Note that, hypothetically, you could run into your platform's command-line length limit, as reported by getconf ARG_MAX - given that on Linux that limit is 2,097,152 bytes (2MB) that's not likely, though.
You could use the append operator, >>, instead of >. Then adjust your loop count as needed to get the output size desired.
You should adjust your code so it is as follows:
for i in {1..3}; do cat file.txt >> file_new.txt; done
The >> operator appends data to a file rather than overwriting it (as > does).
if file.txt is a 1GB file,
cat file.txt > file_new.txt
cat file.txt >> file_new.txt
The > operator will create file_new.txt (1GB);
the >> operator will append to it, making file_new.txt (2GB).
for i in {1..3}; do cat file.txt >> file_new.txt; done
This command will make file_new.txt (3GB), because for i in {1..3} runs three times (assuming file_new.txt starts out empty).
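One caveat with the append approach: >> never truncates, so a file_new.txt left over from a previous run keeps growing across runs. A minimal sketch that truncates the output first:
: > file_new.txt            # start from an empty output file
for i in {1..3}; do cat file.txt >> file_new.txt; done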
As others have mentioned, you can use >> to append. But, you could also just invoke cat once and have it read the file 3 times. For instance:
n=3; cat $( yes file.txt | sed ${n}q ) > file_new.txt
Note that this solution exhibits a common anti-pattern and fails to properly quote the arguments, which will cause issues if the filename contains whitespace. See mklement's solution for a more robust solution.

While loop in bash using variable from txt file

I am new to bash and writing a script to read variables that are stored one per line in a text file (there are thousands of these variables). I tried to write a script that would read the lines, automatically output the solution to the screen, and save it into another text file:
./reader.sh > solution.text
The problem I encounter is that I currently have only one variable stored in Sheetone.txt for testing purposes, which should take about 2 seconds to output everything, but the script gets stuck in the while loop and does not output the solution.
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
    echo sh /usr/local/test/bin/test -ID $line -I
done
As indicated in the comments, you need to provide "something" for your while loop to read. Without a redirection, read waits on the script's standard input (usually your terminal), which is why the loop appears stuck. If a file is given, the loop proceeds until read exhausts the input.
#!/bin/bash
file=Sheetone.txt
while IFS= read -r line
do
    echo sh /usr/local/test/bin/test -ID $line -I
done < "$file"
# -----^^^^^^^ a file!
Otherwise, it was like cycling without wheels...
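To see the same mechanism without a file, you can feed the loop through a pipe (a minimal demo; the numbers are made-up sample IDs standing in for the contents of Sheetone.txt):
printf '%s\n' 101 102 103 | while IFS= read -r line; do
    echo sh /usr/local/test/bin/test -ID "$line" -I
done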

Using while/read/do to pass the content of file as the argument of a command

I'm really new to Linux scripting. I am sure this is simple, but I cannot figure it out.
As part of a script, I am trying to pass the contents of a file as arguments to a command:
while read i
do $COMMAND $i
done < file.lst
I want to pass every line of file.lst as the argument to the command, except the very first line of the file. How do I do this?
EDIT:
Here is the section of the script:
while read i
do cp --recursive --preserve=all $i $DIR
done < $DIR/file.lst
Use process substitution to strip the first line before the loop sees it; sed -n '2,$p' prints from the second line to the end of the file:
while read -r i
do
    "$COMMAND" "$i"
done < <(sed -n '2,$p' file.lst)
This solution does not use a while loop, so I am not entirely sure it solves your problem, but based on your code sample you can do the following:
tail -n +2 input | xargs -n 1 echo
This will read all lines from input starting at line 2 and execute echo once for each line.
the file input contains:
skip
1
2
3
Executing that command gives:
1
2
3
Just substitute input for the file you want and echo for the command you want.
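Applied to the cp loop from the question's edit, the same approach would look like this (a sketch, reusing $DIR from the question):
tail -n +2 "$DIR/file.lst" | while IFS= read -r i; do
    cp --recursive --preserve=all "$i" "$DIR"
done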
Add an extra read to consume the first line before the while loop begins (the skipped line remains available in $REPLY if you ever need it).
{
    read -r
    while read -r i; do
        "$COMMAND" "$i"
    done
} < file.lst

Read filenames from a text file and then make those files?

My code is given below. The echo works fine. But the moment I redirect the output of echo to touch, I get the error "no such file or directory". Why? How do I fix it?
If I copy-paste the output of echo alone, then the file is created, but not with touch.
while read line
do
    #touch < echo -e "$correctFilePathAndName"
    echo -e "$correctFilePathAndName"
done < $file.txt
If you have one file name on each line of your input file file.txt, then you don't need a loop at all. You can just do:
touch $(<file.txt)
to create all the files in one single touch command.
You need to provide the file name as an argument, not via standard input. You can use command substitution via $(…) or `…`:
while read line
do
    touch "$(echo -e "$correctFilePathAndName")"
done < $file.txt
Ehm, lose the echo part... and use the correct variable name.
while read line; do
    touch "$line"
done < $file.txt
Try:
echo -e "$correctFilePathAndName" | touch
EDIT: Sorry, the correct piping is:
echo -e "$correctFilePathAndName" | xargs touch
The < operator redirects via stdin, whereas touch needs the filename as an argument; xargs transforms stdin into arguments for touch.
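One caveat: plain xargs splits its input on any whitespace, so paths containing spaces would break. With GNU xargs you can split on newlines only (a sketch; -d is a GNU extension):
echo -e "$correctFilePathAndName" | xargs -d '\n' touch --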
