Insert the content of one file into another file on Linux

I have two files. I want to insert the content of the first file (file1) into the second file (file2) between some code (the second file is a script). For example, the second file should look like this:
upcode...
#upcode ends here
file1 content
downcode ...
upcode, #upcode ends here, and downcode should never change.
How can this be done?

You can try sed:
sed -e '/file1 content/{r file1' -e 'd}' file2
/file1 content/: matches the placeholder line in file2
r file1: reads in the contents of file1
d: deletes the placeholder line
Note: you can add the -i option to change file2 in place.
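A quick way to sanity-check that idiom is to build two throwaway files in a temporary directory and run the command on them (the file names and contents below are made up for the demonstration):

```shell
#!/bin/sh
# Build sample files: file2 contains the placeholder line "file1 content".
dir=$(mktemp -d)
printf 'inserted line A\ninserted line B\n' > "$dir/file1"
printf 'upcode...\n#upcode ends here\nfile1 content\ndowncode ...\n' > "$dir/file2"

# On the matching line, queue a read of file1, then delete the placeholder.
result=$(cd "$dir" && sed -e '/file1 content/{r file1' -e 'd}' file2)
printf '%s\n' "$result"
rm -rf "$dir"
```

The placeholder line is replaced by the two lines of file1, while the surrounding upcode and downcode lines are untouched.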

Here is a script to do that (note that your start tag has to be unique in the file):
#!/bin/bash
start="#upcode ends here"
: > file2.tmp
while IFS= read -r line
do
  echo "$line" >> file2.tmp
  if [ "$line" = "$start" ]
  then
    cat file1 >> file2.tmp
  fi
done < file2
#mv file2.tmp file2 -- moves (i.e. renames) file2.tmp over file2.

while IFS= read -r f2line; do
    echo "$f2line"
    [[ "$f2line" = "#upcode ends here" ]] && cat file1
done < file2 > merged_file
or to edit file2 in place
ed file2 <<END
/#upcode ends here/ r file1
w
q
END

Related

grep lines containing a specific string (a statement can be spread over at most 3 lines)

I need to collect all the logging done in my project.
I'm using this command to do that:
grep -rnw $1 -e "Logger.[view]*;$" >> log.txt
This line returns all lines containing Logger. followed by one of those characters in the project directory "$1". The problem is that some statements are spread over 2 or 3 lines (IDE formatting), and in that case I get only the first line.
What can I do to get the complete text of such a log, knowing that a log statement always ends with ");"?
Example of such a line:
Logger.v(xxxxxxxxxxxxx
xxxxxxxxxxxxxxxx);
Here is my script:
#!/bin/bash
echo "Hello Logger!"
# get project path
echo "project directory is $1"
# get all project logs and store them into temporary file tmp.txt for processing
grep -rnw $1 -e "Logger.[view]" >> tmp.txt
echo "tmp.txt created successfully"
# remove package name from previous result and store result into log.txt
sed -r 's/.{52}//' tmp.txt >> log.txt
echo "log.txt created successfully"
The grep command returns file_path/file_name:line_number:line.
I found this command, which returns the whole statement even when it is written on 2 or 3 lines, but without the file_path/file_name and the line_number:
sed -n '/Logger.[viewd]/{:start /;/!{N;b start};/Logger.[viewd]/p}' Main.java
Is there a way to have those two results combined?
Example:
/home/xxx/xxx/xxx/Main.java:97:Logger.i(xxxxxxxxxxxxx);
/home/xxx/xxx/xxx/Main.java:106:Logger.d(yyyyyyyyyyyy
yyyyyyyyyyyyyyyyyyyy);
I think that's a line-break problem. Try replacing grep -rnw $1 -e "Logger.[view]" >> tmp.txt with the following lines:
for i in "$1"/*
do
    tr '\n' ' ' < "$i" | grep -nw -e "Logger.[view]" >> tmp.txt
done
Here, tr '\n' ' ' replaces each line break with a single space (note that grep's -r option has no effect when reading from a pipe, so it is dropped here).
I found a solution to my problem, and here is my code:
# get all project logs and store them into log.txt for processing
for i in $(find . -name "*.java")
do
  echo >> log.txt
  echo "**************** file $i ********************************" >> log.txt
  echo >> log.txt
  grep -nw "Logger.[viewd]" "$i" | while read -r line ; do
    # remove carriage returns from the first line to avoid bad results
    line="$(echo "$line" | sed $'s/\r//')"
    # if the first line ends with ");", print it to the log file
    if [[ ${line: -2} == ");" ]]; then
      echo "$line" >> log.txt
    # else fetch the next line as well
    else
      # get the next line number
      line_number="$(echo "$line" | cut -d : -f1)"
      next_line_number=$((line_number+1))
      # get the next line
      next_line=$(sed "${next_line_number}q;d" "$i" | sed -e 's/^[ \t]*//')
      # concatenate the first and second lines
      line="$line $next_line"
      # print the resulting line to the log file
      echo "$line" >> log.txt
    fi
  done
done
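To get the combined file:line:statement output the question asks for, awk can also do the joining in one pass. This is a rough sketch, not the poster's solution; it assumes a statement starts at Logger.[viewd] and runs until a line ending in ");", and the sample Main.java below is invented:

```shell
#!/bin/sh
# Print file:line:statement for each Logger call, joining continuation
# lines until the closing ");" is seen.
dir=$(mktemp -d)
cat > "$dir/Main.java" <<'EOF'
class Main {
    void f() {
        Logger.i(xxxxxxxxxxxxx);
        Logger.d(yyyyyyyyyyyy
            yyyyyyyyyyyyyyyyyyyy);
    }
}
EOF

result=$(cd "$dir" && awk '
/Logger\.[viewd]/ {
    sub(/^[ \t]+/, "")                 # trim leading indentation
    out = FILENAME ":" FNR ":" $0
    while (out !~ /\);[ \t]*$/ && (getline next_part) > 0) {
        sub(/^[ \t]+/, "", next_part)
        out = out " " next_part        # append continuation lines
    }
    print out
}' Main.java)
printf '%s\n' "$result"
rm -rf "$dir"
```

Because FILENAME and FNR are captured before any getline, the prefix always points at the first line of the statement, matching grep's numbering.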

Cat several files into one file with the file name before the data

I have several log files with data in them. What I want to do is cat all these files into one file. But before each file's data goes in, I want the file name to be there without the extension. For example:
Files I have:
file1.log file2.log file3.log
The file that i want: all.log
all.log to have in it:
file1
file1's data
file2
file2's data
file3
file3's data
Using awk
awk 'FNR==1{sub(/[.][^.]*$/, "", FILENAME); print FILENAME} 1' file*.log >all.log
FNR is the file record number. It is one at the beginning of each file. Thus, the test FNR==1 tells us if we are at the beginning of a file. If we are, then we remove the extension from the filename using sub(/[.][^.]*$/, "", FILENAME) and then we print it.
The final 1 in the program is awk's cryptic way of saying print-this-line.
The redirection >all.log saves all the output in file all.log.
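You can verify the behaviour with two throwaway files (names and contents below are arbitrary):

```shell
#!/bin/sh
# Run the awk one-liner on two small sample logs and show the result.
dir=$(mktemp -d)
printf 'alpha\nbeta\n' > "$dir/file1.log"
printf 'gamma\n'       > "$dir/file2.log"
result=$(cd "$dir" && awk 'FNR==1{sub(/[.][^.]*$/, "", FILENAME); print FILENAME} 1' file*.log)
printf '%s\n' "$result"
rm -rf "$dir"
```

Each file's base name appears exactly once, immediately before that file's contents.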
Using shell
for f in file*.log; do echo "${f%.*}"; cat "$f"; done >all.log
Or:
for f in file*.log
do
echo "${f%.*}"
cat "$f"
done >all.log
In shell, for f in file*.log; do starts a loop over all files matching the glob file*.log. The statement echo "${f%.*}" prints the file name minus the extension. ${f%.*} is an example of suffix removal. cat "$f" prints the contents of the file. done >all.log terminates the loop and saves all the output in all.log.
This loop will work correctly even if file names contain spaces, tabs, newlines, or other difficult characters.
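The suffix-removal expansion is worth a tiny example of its own: %.* deletes the shortest trailing match of .*, i.e. just the last extension:

```shell
#!/bin/sh
# ${f%.*} strips the shortest suffix matching ".*" (the extension).
f='file1.log'
base=${f%.*}
echo "$base"

# With two dots, only the last extension is removed.
g='archive.tar.gz'
stem=${g%.*}
echo "$stem"
```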
Suppose you have two files:
foo:
a
b
c
bar:
d
e
f
Using Perl:
perl -lpe 'print $ARGV if $. == 1; close(ARGV) if eof' foo bar > all.log
foo
a
b
c
bar
d
e
f
$. is the line number
$ARGV is the name of the current file
close(ARGV) if eof resets the line number at the end of each file
Using grep:
grep '' foo bar > all.log
foo:a
foo:b
foo:c
bar:d
bar:e
bar:f
for i in file*; do echo "$i" | awk -F"." '{print $1}' >> all.log; cat "$i" >> all.log; done

Show uncommon part of the line

I have two files which contain paths. I want to compare the two files and show only the uncommon part of each line.
1.txt:
/home/folder_name/abc
2.txt:
/home/folder_name/abc/pqr/xyz/mnp
Output I want:
/pqr/xyz/mnp
How can I do this?
This bit of awk does the job:
awk 'NR==FNR {a[++i]=$0; next}
{
    b[++j]=$0;
    if(length(a[j])>length(b[j])) {t=a[j]; a[j]=b[j]; b[j]=t}
    sub(a[j],"",b[j]);
    print b[j]
}' 2.txt 1.txt # or 1.txt 2.txt, it doesn't matter
Write the line from the first file to the array a.
Write the line from the second to b.
Swap a[j] and b[j] if a[j] is longer than b[j] (this might not be necessary if the longer text is always in b).
Remove the part found in a[j] from b[j] and print b[j].
This is a general solution; it makes no assumption that the match is at the start of the line, or that the contents of one file's line should be removed from the other. If you can afford to make those assumptions, the script can be simplified.
If the match may occur more than once on the line, you can use gsub rather than sub to perform a global substitution.
Considering you have strings in 1.txt and in 2.txt, the following code will do:
paste 1.txt 2.txt |
while read a b;
do
if [[ ${#a} -gt ${#b} ]];
then
echo ${a/$b};
else
echo ${b/$a};
fi;
done;
This is how it works on my system,
shiplu#:~/test/bash$ cat 1.txt
/home/shiplu/test/bash
/home/shiplu/test/bash/hello/world
shiplu#:~/test/bash$ cat 2.txt
/home/shiplu/test/bash/good/world
/home/shiplu/test/bash
shiplu#:~/test/bash$ paste 1.txt 2.txt |
> while read a b;
> do
> if [[ ${#a} -gt ${#b} ]];
> then
> echo ${a/$b};
> else
> echo ${b/$a};
> fi;
> done;
/good/world
/hello/world
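When one path is known to be a prefix of the other, bash's anchored prefix removal ${long#"$short"} is a slightly safer variant of ${b/$a}: it only matches at the start of the string, and quoting the inner variable stops its characters being taken as glob patterns. A minimal sketch:

```shell
#!/bin/bash
# Strip the common prefix with anchored prefix removal.
a='/home/folder_name/abc'
b='/home/folder_name/abc/pqr/xyz/mnp'
result=${b#"$a"}
echo "$result"
```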
This script will compare all lines in the files and only output the change in each line.
First it counts the number of lines in the first file.
Then I start a loop that will iterate for that number of lines.
Declare two variables that hold the same line from both files.
Compare the lines, and if they are the same, output that they are.
If they are not, then replace the duplicate parts of the string with nothing (effectively removing them).
I used : as the separator in sed, as your variables contain /. So if they contain :, you may want to consider changing the separator.
Probably not the most efficient solution, but it works.
#!/bin/bash
NUMOFLINES=$(wc -l < "1.txt")
echo $NUMOFLINES
for ((i = 1 ; i <= $NUMOFLINES ; i++)); do
    f1=$(sed -n "${i}p" 1.txt)
    f2=$(sed -n "${i}p" 2.txt)
    if [[ $f1 < $f2 ]]; then
        echo -n "Line $i:"
        sed 's:'"$f1"'::' <<< "$f2"
    elif [[ $f1 > $f2 ]]; then
        echo -n "Line $i:"
        sed 's:'"$f2"'::' <<< "$f1"
    else
        echo "Line $i: Both lines are the same"
    fi
    echo ""
done
If you happen to use bash, you could try this one:
echo $(diff <(grep -o . 1.txt) <(grep -o . 2.txt) \
| sed -n '/^[<>]/ {s/^..//;p}' | tr -d '\n')
It does a character-by-character comparison using diff (where grep -o . gives an intermediate line for each character to be fed to line-wise diff), and just prints the differences (intermediate diff output lines starting with markers < or > omitted, then joining lines with tr).
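Here is the same pipeline run end to end on the question's sample paths (in a throwaway directory), without the outer echo $(...):

```shell
#!/bin/bash
# Character-wise diff of the two sample paths; the surviving characters
# are exactly the uncommon suffix.
dir=$(mktemp -d)
printf '/home/folder_name/abc\n'             > "$dir/1.txt"
printf '/home/folder_name/abc/pqr/xyz/mnp\n' > "$dir/2.txt"
result=$(cd "$dir" && diff <(grep -o . 1.txt) <(grep -o . 2.txt) \
    | sed -n '/^[<>]/ {s/^..//;p}' | tr -d '\n')
printf '%s\n' "$result"
rm -rf "$dir"
```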
If you have multiple lines in your input (which you did not mention in your question) then try something like this (where % is a character not contained in your input):
diff <(cat 1.txt | tr '\n' '%' | grep -o .) \
<(cat 2.txt | tr '\n' '%' | sed -e 's/%/%%/g' | grep -o .) \
| sed -n '/^[<>]/ {s/^..//;p}' | tr -d '\n' | tr '%' '\n'
This extends the single-line solution by adding line end markers (e.g. %) which diff is forced to include in its output by adding % on the left and %% on the right.
If both files always have a single line each, then the below works:
perl -lne '$a=$_ if($.==1);print $1 if(/$a(.*)/ && $.==2)' 1.txt 2.txt
Tested Below:
> cat 1.txt
/home/folder_name/abc
> cat 2.txt
/home/folder_name/abc/pqr/xyz/mnp
> perl -lne '$a=$_ if($.==1);print $1 if(/$a(.*)/ && $.==2)' 1.txt 2.txt
/pqr/xyz/mnp
>

insert the contents of a file into another (at a specific line of the target file) - BASH/LINUX

I tried doing it with cat, and after I typed the second file I added | head -$line | tail -1, but that doesn't work because it performs the cat first.
Any ideas? I need to do it with cat or something else.
I'd probably use sed for this job:
line=3
sed -e "${line}r file2" file1
If you're looking to overwrite file1 and you have GNU sed, add the -i option. Otherwise, write to a temporary file and then copy/move the temporary file over the original, cleaning up as necessary (that's the trap stuff below). Note: copying the temporary over the file preserves links; moving does not (but is swifter, especially if the file is big).
line=3
tmp="./sed.$$"
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
sed -e "${line}r file2" file1 > $tmp
cp $tmp file1
rm -f $tmp
trap 0
Just for fun, and just because we all love ed, the standard editor, here's an ed version. It's very efficient (ed is a genuine text editor)!
ed -s file2 <<< $'3r file1\nw'
If the line number is stored in the variable line then:
ed -s file2 <<< "${line}r file1"$'\nw'
Just to please Zack, here's a version with fewer bashisms, in case you don't like bash (personally, I don't like pipes and subshells, I prefer herestrings, but hey, as I said, that's only to please Zack):
printf "%s\n" "${line}r file1" w | ed -s file2
or (to please Sorpigal):
printf "%dr %s\nw" "$line" file1 | ed -s file2
As Jonathan Leffler mentions in a comment, and if you intend to use this method in a script, use a heredoc (it's usually the most efficient):
ed -s file2 <<EOF
${line}r file1
w
EOF
Hope this helps!
P.S. Don't hesitate to leave a comment if you feel you need to express yourself about the ways to drive ed, the standard editor.
cat file1 >>file2
will append the content of file1 to file2.
cat file1 file2
will concatenate file1 and file2 and send the output to the terminal.
cat file1 file2 >file3
will create or overwrite file3 with the concatenation of file1 and file2.
cat file1 file2 >>file3
will append the concatenation of file1 and file2 to the end of file3.
Edit:
For truncating file2 (keeping only its first 10 lines) before appending file1:
sed -e '11,$d' -i file2 && cat file1 >>file2
or for making a 500-line file (keep the first n lines of file2, where n is 500 minus the number of lines in file1, then append file1):
n=$((500-$(wc -l <file1)))
sed -e "$((n+1)),\$d" -i file2 && cat file1 >>file2
Lots of ways to do it, but I like to choose a way that involves making tools.
First, setup test environment
rm -rf /tmp/test
mkdir /tmp/test
printf '%s\n' {0..9} > /tmp/test/f1
printf '%s\n' {one,two,three,four,five,six,seven,eight,nine,ten} > /tmp/test/f2
Now let's make the tool, and in this first pass we'll implement it badly.
# insert contents of file $1 into file $2 at line $3
insert_at () {
    insert="$1" ; into="$2" ; at="$3"
    head -n "$at" "$into"
    cat "$insert"
    tail -n "+$((at + 1))" "$into"
}
Then run the tool to see the amazing results.
$ insert_at /tmp/test/f1 /tmp/test/f2 5
But wait, the result is on stdout! What about overwriting the original? No problem, we can make another tool for that.
insert_at_replace () { tmp=$(mktemp) ; insert_at "$@" > "$tmp" ; mv "$tmp" "$2" ; }
And run it
$ insert_at_replace /tmp/test/f1 /tmp/test/f2 5
$ cat /tmp/test/f2
"Your implementation sucks!"
I know, but that's the beauty of making simple tools. Let's replace insert_at with the sed version.
insert_at () { insert="$1" ; into="$2" ; at="$3" ; sed -e "${at}r ${insert}" "$into" ; }
And insert_at_replace keeps working (of course). The implementation of insert_at_replace can also be changed to be less buggy, but I'll leave that as an exercise for the reader.
I like doing this with head and tail if you don't mind managing a new file:
head -n 16 file1 > file3 &&
cat file2 >> file3 &&
tail -n+56 file1 >> file3
You can collapse this onto one line if you like. Then, if you really need it to overwrite file1, do: mv file3 file1 (optionally include && between commands).
Notes:
head -n 16 file1 means first 16 lines of file1
tail -n+56 file1 means file1 starting from line 56 to the end
Hence, I actually skipped lines 17 through 55 from file1.
Of course, you could change 56 to 17 so that no lines are skipped.
I prefer to mix simple head and tail commands rather than trying a magic sed command.

Bash loop to compare files

I'm obviously missing something simple, and I know the problem is that it's creating blank output, which is why it can't compare. However, if someone could shed some light on this it would be great - I haven't isolated it.
Ultimately, I'm trying to compare the md5sums from a list stored in a txt file to those of the files stored on the server. If there are errors, I need it to report them. Here's the output:
root#vps [~/testinggrounds]# cat md5.txt | while read a b; do
> md5sum "$b" | read c d
> if [ "$a" != "$c" ] ; then
> echo "md5 of file $b does not match"
> fi
> done
md5 of file file1 does not match
md5 of file file2 does not match
root#vps [~/testinggrounds]# md5sum file*
2a53da1a6fbfc0bafdd96b0a2ea29515 file1
bcb35cddc47f3df844ff26e9e2167c96 file2
root#vps [~/testinggrounds]# cat md5.txt
2a53da1a6fbfc0bafdd96b0a2ea29515 file1
bcb35cddc47f3df844ff26e9e2167c96 file2
Not directly answering your question, but md5sum(1):
-c, --check
read MD5 sums from the FILEs and check them
Like:
$ ls
1.txt 2.txt md5.txt
$ cat md5.txt
d3b07384d113edec49eaa6238ad5ff00 1.txt
c157a79031e1c40f85931829bc5fc552 2.txt
$ md5sum -c md5.txt
1.txt: OK
2.txt: OK
The problem you are having is that your inner read is executed in a subshell. In bash, a subshell is created when you pipe into a command. Once the subshell exits, the variables $c and $d are gone. You can use process substitution to avoid the subshell:
while read -r -u3 sum filename; do
    read -r cursum _ < <(md5sum "$filename")
    if [[ $sum != $cursum ]]; then
        printf 'md5 of file %s does not match\n' "$filename"
    fi
done 3<md5.txt
The redirection 3<md5.txt causes the file to be opened as file descriptor 3. The -u 3 option to read causes it to read from that file descriptor. The inner read still reads from stdin.
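The subshell effect is easy to demonstrate with a counter: incremented inside a piped loop it is lost, while with process substitution the loop runs in the current shell and the value survives (bash without the lastpipe option assumed):

```shell
#!/bin/bash
# Pipe into the loop: the loop body runs in a subshell, so the
# assignments to count vanish when the pipe ends.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count + 1)); done
piped=$count

# Process substitution: the loop runs in the current shell.
count=0
while read -r line; do count=$((count + 1)); done < <(printf 'a\nb\nc\n')
substituted=$count
echo "piped=$piped substituted=$substituted"
```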
I'm not going to argue. I simply try to avoid double reads inside loops.
#! /bin/bash
cat md5.txt | while read sum file
do
    prev_sum=$(md5sum "$file" | awk '{print $1}')
    if [ "$sum" != "$prev_sum" ]
    then
        echo "md5 of file $file does not match"
    else
        echo "$file is fine"
    fi
done
