Randomly shuffling lines in Linux / Bash

I have some files in Linux, for example two, and I need to shuffle their lines together into one file.
For example
$cat file1
line 1
line 2
line 3
line 4
line 5
line 6
line 7
line 8
and
$cat file2
linea one
linea two
linea three
linea four
linea five
linea six
linea seven
linea eight
After shuffling the two files together I could obtain something like:
linea eight
line 4
linea five
line 1
linea three
line 8
linea seven
line 5
linea two
linea one
line 2
linea four
line 7
linea six
line 3
line 6

You should use the shuf command =)
cat file1 file2 | shuf
Or with Perl:
cat file1 file2 | perl -MList::Util=shuffle -wne 'print shuffle <>;'

Sort: (similar lines will be put together; see the demonstration after the Awk variant below)
cat file1 file2 | sort -R
Shuf:
cat file1 file2 | shuf
Perl:
cat file1 file2 | perl -MList::Util=shuffle -e 'print shuffle<STDIN>'
BASH:
cat file1 file2 | while IFS= read -r line
do
printf "%06d %s\n" $RANDOM "$line"
done | sort -n | cut -c8-
Awk:
cat file1 file2 | awk 'BEGIN{srand()}{printf "%06d %s\n", rand()*1000000, $0;}' | sort -n | cut -c8-
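To see the sort -R caveat in action: duplicate lines stay adjacent because identical keys get the same random hash, whereas shuf permutes every line independently. Illustrative run (the exact order will vary):
$ printf 'a\na\nb\nc\n' | sort -R
c
a
a
b
$ printf 'a\na\nb\nc\n' | shuf
a
c
b
a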

Just a note to OS X users who use MacPorts: the shuf command is part of coreutils and is installed under the name gshuf.
$ sudo port install coreutils
$ gshuf example.txt # or cat example.txt | gshuf

You don't need to use pipes here. Sort alone does this with the file(s) as parameters. I would just do
sort -R file1
or if you have multiple files
sort -R file1 file2

Here's a one-liner that doesn't rely on shuf or sort -R, which I didn't have on my mac:
while read line; do echo $RANDOM $line; done < my_file | sort -n | cut -f2- -d' '
This iterates over all the lines in my_file and reprints them in a randomized order.
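If the lines may contain leading whitespace, backslashes, or glob characters, a slightly more defensive version of the same idea (a sketch) is:
while IFS= read -r line; do printf '%s %s\n' "$RANDOM" "$line"; done < my_file | sort -n | cut -f2- -d' '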

I would use shuf too.
Another option: GNU sort has
-R, --random-sort
sort by random hash of keys
You could try:
cat file1 file2 | sort -R

This worked for me. It employs the Fisher-Yates shuffle (on the function's arguments).
randomize()
{
    arguments=("$@")
    declare -a out
    i=$(( $# - 1 ))
    j=0
    while [[ $i -ge 0 ]] ; do
        which=$(random_range 0 "$i")
        out[j]=${arguments[which]}
        arguments[which]=${arguments[i]}
        (( i-- ))
        (( j++ ))
    done
    echo "${out[*]}"
}
random_range()
{
    # print a random integer between $1 and $2, inclusive
    # (note: $RANDOM is 15-bit, so there is a slight modulo bias for large ranges)
    low=$1
    range=$(( $2 - $1 + 1 ))
    echo $(( low + RANDOM % range ))
}
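A usage sketch: the function shuffles whatever arguments you pass it, so it is suited to word lists rather than lines containing spaces (the output order will of course vary):
$ randomize one two three four five
four one five two three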

It is clearly biased (for example, about half the time the list will start with the first line), but for some basic randomization with just bash builtins I guess it is fine? Just decide yes/no for each line, printing it now or deferring it to the tail...
shuffle() {
    local l tail=
    while IFS= read -r l; do
        if [ $((RANDOM % 2)) = 1 ]; then
            echo "$l"
        else
            tail="${tail}${l}"$'\n'
        fi
    done < "$1"
    printf '%s' "$tail"
}
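Usage sketch: the function takes a filename, so process substitution lets it shuffle the concatenation of several files:
shuffle <(cat file1 file2)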

Related

Fastest way to compare hundreds of thousands of files, and create output results file in bash

I have the following:
- A values file, values.txt
- A directory structure: ./dataset/label/author/files.txt
- Tens of thousands of files.txt's
- A file called targets.txt, which contains the location of every files.txt
Example targets.txt
./dataset/tallperson/Jabba/awesome.txt
./dataset/fatperson/Detox/toxic.txt
I have a file called values.txt, which contains hundreds of thousands of lines of values. These values are things like "aef", "; i", "jfk", etc.: random 3-character lines.
I also have tens of thousands of files, each of which also contains hundreds to thousands of lines. Each line is also a random 3-character string.
values.txt was created from the values of each files.txt, so there is no value in any files.txt that isn't contained in values.txt. values.txt contains NO repeating values.
Example:
./dataset/weirdperson/Crooked/file1.txt
LOL
hel
lo
how
are
you
on
thi
s f
ine
day
./dataset/awesomeperson/Mild/file2.txt
I a
m v
ery
goo
d.
Tha
nks
LOL
values.txt
are
you
on
thi
s f
ine
day
goo
d.
Tha
hel
lo
how
I a
m v
ery
nks
LOL
The above is just example data. Each file will contain hundreds of lines. And values.txt will contain hundreds of thousands of lines.
My goal here is to make one output file, where each line corresponds to one file. Each line will contain N values, one per line of values.txt, separated by commas. Each value is simply the number of times that file contains the value on the corresponding line of values.txt.
The result should look something like this. With line 1 being file1.txt and line 2 being file2.txt.
Result.txt
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,
Now, the last thing: after getting this result I would like to add a label. The label is the Nth parent directory of the file; for this example, let's say the 2nd parent directory, so the label would be "weirdperson" or "awesomeperson". As a result, the new Results.txt file would look like this.
Results.txt
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,weirdperson
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,awesomeperson
I would like a way to accomplish all of this, but I need it to be fast, as I am working with a very large-scale dataset.
This is my current code, but it's too slow. The bottleneck is line 2, which spawns one grep process per line of values.txt for every file.
Script. Each file located at "./dataset/label/author/file.java"
1 while IFS= read file_name; do
2 cat values.txt | xargs -d '\n' -I {} grep -Fc -- "{}" "$file_name" | xargs printf "%d," >> Results.txt;
3 label=$(echo "$file_name" | cut -d '/' -f 3);
4 printf "$label\n" >> Results.txt;
5 done < targets.txt
------------
To REPLICATE this problem, do the following:
mkdir -p dataset/{label1,label2}
touch file1.txt; chmod 777 file1.txt
touch file2.txt; chmod 777 file2.txt
echo "Enter anything here" > file1.txt
echo "Enter something here too" > file2.txt
mv file1.txt ./dataset/label1
mv file2.txt ./dataset/label2
find ./dataset/ -type f -name "*.txt" | while IFS= read -r file_name; do sed -e "s/.\{3\}/&\n/g" "$file_name" | sort -u > "$(dirname "$file_name")/modified-$(basename "$file_name")"; done
find ./dataset/ -type f -name "modified-*.txt" | xargs -d '\n' -I {} echo {} >> targets.txt
xargs cat < targets.txt | sort -u > values.txt
With the above run unchanged, you should get a values.txt similar to the one below. If there are any lines with fewer or more than 3 characters for some reason, please delete them.
any
e
Ent
er
eth
he
her
ing
ng
re
som
thi
too
You should get a targets.txt file
./dataset/label2/modified-file2.txt
./dataset/label1/modified-file1.txt
From here, the goal is to check every file in targets.txt, count how many times it contains each value in values.txt, and output the results with the label to Results.txt.
The following script will work for this example, but I need it to be much faster for large-scale operations.
while IFS= read file_name; do
cat values.txt | xargs -d '\n' -I {} grep -Fc -- "{}" $file_name | xargs printf "%d," >> Results.txt;
label=$(echo "$file_name" | cut -d '/' -f 3);
printf "$label\n" >> Results.txt;
done < targets.txt
Here's another example
Example 2:
./dataset/weirdperson/Crooked/file1.txt
LOL
LOL
HAHA
./dataset/awesomeperson/Mild/file2.txt
LOL
LOL
LOL
values.txt
LOL
HAHA
Result.txt
2,1,weirdperson
3,0,awesomeperson
Here's a solution in Python, using its ordered dictionary datatype.
import os
from collections import OrderedDict

# read samples from values.txt into an OrderedDict:
# each dict key is a line from the file
# (including the trailing newline, but that doesn't matter)
# each dict value is 0
with open('values.txt', 'r') as f:
    samplecount0 = OrderedDict((sample, 0) for sample in f.readlines())

# get list of filenames from targets.txt
with open('targets.txt', 'r') as f:
    targets = [t.rstrip('\n') for t in f.readlines()]

# for each target:
#   read its lines of samples
#   increment the corresponding count in samplecount
#   print out samplecount in a single line separated by commas
#   each line also gets the 2nd-to-last directory component of the target's pathname
for target in targets:
    # copy samplecount0 to samplecount so we don't have to read the values.txt file again
    samplecount = samplecount0.copy()
    with open(target, 'r') as f:
        # for each sample in the target file, increment the samplecount dict entry
        for tsample in f.readlines():
            samplecount[tsample] += 1
    output = ','.join(str(v) for v in samplecount.values())
    output += ',' + os.path.basename(os.path.dirname(os.path.dirname(target)))
    print(output)
Output:
$ python3 doit.py
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,weirdperson
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,awesomeperson
Try this:
<targets.txt xargs -n1 -P4 bash -c "
awk 'NR==FNR{a[\$0];next} {if (\$0 in a) {printf \"1,\"} else {printf \"0,\"}}' \"\$1\" values.txt |
sed $'s\x01$\x01'\"\$(<<<\"\$1\" cut -d/ -f3)\"'\n'$'\x01'
" --
The -P4 lets you run up to four of the jobs from targets.txt in parallel. The short awk script checks, for each line of values.txt, whether it occurs in the target file, and prints 0 or 1 followed by a comma. Then sed is used to append the 3rd part of the folder path to the end of the line. The sed line looks strange because I used the unprintable character $'\x01' as the separator for the s command.
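For reference, the escaped awk inside the bash -c expands, for a single target (here one of the test files created below), to the following plain command; the trailing sed then appends the label:
awk 'NR==FNR{a[$0];next} {if ($0 in a) {printf "1,"} else {printf "0,"}}' ./dataset/weirdperson/Crooked/file1.txt values.txt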
Tested with:
mkdir -p ./dataset/weirdperson/Crooked
cat <<EOF >./dataset/weirdperson/Crooked/file1.txt
LOL
hel
lo
how
are
you
on
thi
s f
ine
day
EOF
mkdir -p ./dataset/awesomeperson/Mild/
cat <<EOF >./dataset/awesomeperson/Mild/file2.txt
I a
m v
ery
goo
d.
Tha
nks
LOL
EOF
cat <<EOF >values.txt
are
you
on
thi
s f
ine
day
goo
d.
Tha
hel
lo
how
I a
m v
ery
nks
LOL
EOF
cat <<EOF >targets.txt
./dataset/weirdperson/Crooked/file1.txt
./dataset/awesomeperson/Mild/file2.txt
EOF
measure_start() {
declare -g ttic_start
echo "==> Test $* <=="
ttic_start=$(date +%s.%N)
}
measure_end() {
local end
end=$(date +%s.%N)
local start
start="$ttic_start"
ttic_runtime=$(python -c "print(${end} - ${start})")
echo "Runtime: $ttic_runtime"
echo
}
measure_start original
while IFS= read file_name; do
cat values.txt | xargs -d '\n' -I {} grep -Fc -- "{}" $file_name | xargs printf "%d,"
label=$(echo "$file_name" | cut -d '/' -f 3);
printf "$label\n"
done < targets.txt
measure_end
measure_start first try with bash
nl -w1 values.txt | sort -k2.2 > values_sorted.txt
< targets.txt xargs -n1 -P0 bash -c "
sort -t$'\t' \"\$1\" |
join -t$'\t' -12 -21 -eEMPTY -a1 -o1.1,2.1 values_sorted.txt - |
sort -s -n -k1.1 |
sed 's/.*\tEMPTY/0/;t;s/.*/1/' |
tr '\n' ',' |
sed $'s\x01$\x01'\"\$(<<<\"\$1\" cut -d/ -f3)\"'\n'$'\x01'
" --
measure_end
measure_start second try with awk
<targets.txt xargs -n1 -P0 bash -c "
awk 'NR==FNR{a[\$0];next} {if (\$0 in a) {printf \"1,\"} else {printf \"0,\"}}' \"\$1\" values.txt |
sed $'s\x01$\x01'\"\$(<<<\"\$1\" cut -d/ -f3)\"'\n'$'\x01'
" --
measure_end
Outputs:
==> Test original <==
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,weirdperson
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,awesomeperson
Runtime: 0.133769512177
==> Test first try with bash <==
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,awesomeperson
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,weirdperson
Runtime: 0.0322473049164
==> Test second try with awk <==
0,0,0,0,0,0,0,1,1,1,0,0,0,1,1,1,1,1,awesomeperson
1,1,1,1,1,1,1,0,0,0,1,1,1,0,0,0,0,1,weirdperson
Runtime: 0.0180222988129

Read multiple arguments per line from file and do arithmetic with shell script

I have a file called input.txt:
A 1 2
B 3 4
Each line of this file means A=1*2=2 and B=3*4=12...
So I want to output such calculation to a file output.txt:
A=2
B=12
And I want to use a shell script, calculate.sh, to finish this task:
#!/bin/bash
while read name; do
$var1=$(echo $name | cut -f1)
$var2=$(echo $name | cut -f2)
$var3=$(echo $name | cut -f3)
echo $var1=(expr $var2 * $var3)
done
and I type:
cat input.txt | ./calculate.sh > output.txt
But my approach doesn't work. How can I get this task done right?
I would use awk.
$ awk '{print $1"="$2*$3}' file
A=2
B=12
Use the output redirection operator to store the output in another file.
awk '{print $1"="$2*$3}' file > outfile
In BASH you can do:
while read -r a m n; do printf "%s=%d\n" $a $((m*n)); done < input.txt > output.txt
cat output.txt
A=2
B=12
calculate.sh:
#!/bin/bash
while read a b c; do
echo "$a=$((b*c))"
done
bash calculate.sh < input.txt outputs:
A=2
B=12
In bash, doing math requires double parentheses:
echo "$((3+4))"

Show uncommon part of the line

Hi, I have two files which contain paths. I want to compare the two files and show only the uncommon part of each line.
1.txt:
/home/folder_name/abc
2.txt:
/home/folder_name/abc/pqr/xyz/mnp
Output I want:
/pqr/xyz/mnp
How can I do this?
This bit of awk does the job:
$ awk 'NR==FNR {a[++i]=$0; next}
       {
         b[++j]=$0;
         if(length(a[j])>length(b[j])) {t=a[j]; a[j]=b[j]; b[j]=t}
         sub(a[j],"",b[j]);
         print b[j]
       }' 2.txt 1.txt    # or 1.txt 2.txt, it doesn't matter
Write the line from the first file to the array a.
Write the line from the second to b.
Swap a[j] and b[j] if a[j] is longer than b[j] (this might not be necessary if the longer text is always in b).
Remove the part found in a[j] from b[j] and print b[j].
This is a general solution; it makes no assumption that the match is at the start of the line, or that the contents of one file's line should be removed from the other. If you can afford to make those assumptions, the script can be simplified.
If the match may occur more than once on the line, you can use gsub rather than sub to perform a global substitution.
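Note that sub() treats a[j] as a regular expression; if the paths can contain regex metacharacters, a literal removal with index() and substr() is a safer sketch of the same approach:
awk 'NR==FNR {a[++i]=$0; next}
     {
       b[++j]=$0
       if (length(a[j]) > length(b[j])) { t=a[j]; a[j]=b[j]; b[j]=t }
       p = index(b[j], a[j])
       if (p) b[j] = substr(b[j], 1, p-1) substr(b[j], p+length(a[j]))
       print b[j]
     }' 2.txt 1.txt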
Considering you have strings in 1.txt and in 2.txt, the following code will do.
paste 1.txt 2.txt |
while read a b;
do
    if [[ ${#a} -gt ${#b} ]];
    then
        echo ${a/$b};
    else
        echo ${b/$a};
    fi;
done;
This is how it works on my system,
shiplu#:~/test/bash$ cat 1.txt
/home/shiplu/test/bash
/home/shiplu/test/bash/hello/world
shiplu#:~/test/bash$ cat 2.txt
/home/shiplu/test/bash/good/world
/home/shiplu/test/bash
shiplu#:~/test/bash$ paste 1.txt 2.txt |
> while read a b;
> do
> if [[ ${#a} -gt ${#b} ]];
> then
> echo ${a/$b};
> else
> echo ${b/$a};
> fi;
> done;
/good/world
/hello/world
This script will compare all lines in the files and only output the change in each line.
First it counts the number of lines in the first file.
Then it starts a loop that will iterate for that number of lines.
It declares two variables holding the same line from both files.
It compares the lines and, if they are the same, outputs that they are.
If they are not, it replaces the duplicate part of the string with nothing (effectively removing it).
I used : as the separator in sed as your variables contain /. So if they contain : then you may want to consider changing it.
Probably not the most efficient solution, but it works.
#!/bin/bash
NUMOFLINES=$(wc -l < "1.txt")
echo $NUMOFLINES
for ((i = 1 ; i <= $NUMOFLINES ; i++)); do
    f1=$(sed -n $i'p' 1.txt)
    f2=$(sed -n $i'p' 2.txt)
    if [[ $f1 < $f2 ]]; then
        echo -n "Line $i:"
        sed 's:'"$f1"'::' <<< "$f2"
    elif [[ $f1 > $f2 ]]; then
        echo -n "Line $i:"
        sed 's:'"$f2"'::' <<< "$f1"
    else
        echo "Line $i: Both lines are the same"
    fi
    echo ""
done
If you happen to use bash, you could try this one:
echo $(diff <(grep -o . 1.txt) <(grep -o . 2.txt) \
| sed -n '/^[<>]/ {s/^..//;p}' | tr -d '\n')
It does a character-by-character comparison using diff (where grep -o . gives an intermediate line for each character to be fed to line-wise diff), and just prints the differences (intermediate diff output lines starting with markers < or > omitted, then joining lines with tr).
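To see what the character-splitting step feeds to diff, for example:
$ printf '/abc\n' | grep -o .
/
a
b
c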
If you have multiple lines in your input (which you did not mention in your question) then try something like this (where % is a character not contained in your input):
diff <(cat 1.txt | tr '\n' '%' | grep -o .) \
<(cat 2.txt | tr '\n' '%' | sed -e 's/%/%%/g' | grep -o .) \
| sed -n '/^[<>]/ {s/^..//;p}' | tr -d '\n' | tr '%' '\n'
This extends the single-line solution by adding line end markers (e.g. %) which diff is forced to include in its output by adding % on the left and %% on the right.
If both the files have always a single line in each, then below works:
perl -lne '$a=$_ if($.==1);print $1 if(/$a(.*)/ && $.==2)' 1.txt 2.txt
Tested Below:
> cat 1.txt
/home/folder_name/abc
> cat 2.txt
/home/folder_name/abc/pqr/xyz/mnp
> perl -lne '$a=$_ if($.==1);print $1 if(/$a(.*)/ && $.==2)' 1.txt 2.txt
/pqr/xyz/mnp
>

Easy way of selecting certain lines from a file in a certain order

I have a text file, with many lines. I also have a selected number of lines I want to print out, in certain order. Let's say, for example, "5, 3, 10, 6". In this order.
Is there some easy and "canonical" way of doing this? (with "standard" Linux tools, and bash)
When I tried the answers from this question
Bash tool to get nth line from a file
they always print the lines in the order they appear in the file.
A one liner using sed:
for i in 5 3 10 6 ; do sed -n "${i}p" < ff; done
A rather efficient method, if your file is not too large, is to read it all into memory in an array, one line per element, using mapfile (a Bash ≥ 4 builtin):
mapfile -t array < file.txt
Then you can echo all the lines you want in any order, e.g.,
printf '%s\n' "${array[4]}" "${array[2]}" "${array[9]}" "${array[5]}"
to print the lines 5, 3, 10, 6. Now you may find it a bit awkward that the array indices start at 0, so that you have to offset your numbers. This can easily be cured with the -O option of mapfile:
mapfile -t -O 1 array < file.txt
this will start assigning to array at index 1, so that you can print your lines 5, 3, 10 and 6 as:
printf '%s\n' "${array[5]}" "${array[3]}" "${array[10]}" "${array[6]}"
Finally, you want to make a wrapper function for this:
printlines() {
    local i
    for i; do printf '%s\n' "${array[i]}"; done
}
so that you can just state:
printlines 5 3 10 6
And it's all pure Bash, no external tools!
As @glennjackmann suggests in the comments you can make the helper function also take care of reading the file (passed as argument):
printlinesof() {
    # $1 is filename
    # $2,... are the lines to print
    local i array
    mapfile -t -O 1 array < "$1" || return 1
    shift
    for i; do printf '%s\n' "${array[i]}"; done
}
Then you can use it as:
printlinesof file.txt 5 3 10 6
And if you also want to handle stdin:
printlinesof() {
    # $1 is filename or - for stdin
    # $2,... are the lines to print
    local i array file=$1
    [[ $file = - ]] && file=/dev/stdin
    mapfile -t -O 1 array < "$file" || return 1
    shift
    for i; do printf '%s\n' "${array[i]}"; done
}
so that
printf '%s\n' {a..z} | printlinesof - 5 3 10 6
will also work.
Here is one way using awk:
awk -v s='5,3,10,6' 'BEGIN{split(s, a, ","); for (i=1; i<=length(a); i++) b[a[i]]=i}
b[NR]{data[NR]=$0} END{for (i=1; i<=length(a); i++) print data[a[i]]}' file
Testing:
cat file
Line 1
Line 2
Line 3
Line 4
Line 5
Line 6
Line 7
Line 8
Line 9
Line 10
Line 11
Line 12
awk -v s='5,3,10,6' 'BEGIN{split(s, a, ","); for (i=1; i<=length(a); i++) b[a[i]]=i}
b[NR]{data[NR]=$0} END{for (i=1; i<=length(a); i++) print data[a[i]]}' file
Line 5
Line 3
Line 10
Line 6
First, generate a sed expression that would print the lines with a number at the beginning that you can later use to sort the output:
#!/bin/bash
lines=(5 3 10 6)
sed=''
i=0
for line in "${lines[@]}" ; do
    sed+="${line}s/^/$((i++)) /p;"
done
for i in {a..z} ; do echo $i ; done \
    | sed -n "$sed" \
    | sort -n \
    | cut -d' ' -f2-
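For lines=(5 3 10 6) the loop builds the sed expression
5s/^/0 /p;3s/^/1 /p;10s/^/2 /p;6s/^/3 /p;
so every selected line is printed prefixed with its position in the wanted order, which sort -n then restores and cut strips off.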
I'd probably use Perl, though:
for c in {a..z} ; do echo $c ; done \
    | perl -e 'undef @lines{@ARGV};
               while (<STDIN>) {
                   $lines{$.} = $_ if exists $lines{$.};
               }
               print @lines{@ARGV};
              ' 5 3 10 6
You can also use Perl instead of hacking with sed in the first solution:
for c in {a..z} ; do echo $c ; done \
    | perl -e '%lines = map { $ARGV[$_], ++$i } 0 .. $#ARGV;
               while (<STDIN>) {
                   print "$lines{$.} $_" if exists $lines{$.};
               }
              ' 5 3 10 6 | sort -n | cut -d' ' -f2-
l=(5 3 10 6)
printf "%s\n" {a..z} |
    sed -n "$(printf "%d{=;p};" "${l[@]}")" |
    paste - - | {
        while IFS=$'\t' read -r nr text; do
            line[nr]=$text
        done
        for n in "${l[@]}"; do
            echo "${line[n]}"
        done
    }
You can use the nl trick: number the lines in the input and join the output with the list of actual line numbers. Additional sorts are needed to make the join possible, as it needs sorted input (so the nl trick is used once more to number the expected lines):
#! /bin/bash
LINES=(5 3 10 6)
lines=$( IFS=$'\n' ; echo "${LINES[*]}" | nl )
for c in {a..z} ; do
echo $c
done | nl \
| grep -E '^\s*('"$( IFS='|' ; echo "${LINES[*]}")"')\s' \
| join -12 -21 <(echo "$lines" | sort -k2n) - \
| sort -k2n \
| cut -d' ' -f3-

Merge two text files at a specific position

I need to merge two files with a Bash script.
File_1.txt
TEXT01 TEXT02 TEXT03 TEXT04
TEXT05 TEXT06 TEXT07 TEXT08
TEXT09 TEXT10 TEXT11 TEXT12
File_2.txt
1993.0
1994.0
1995.0
Result.txt
TEXT01 TEXT02 1993.0 TEXT03 TEXT04
TEXT05 TEXT06 1994.0 TEXT07 TEXT08
TEXT09 TEXT10 1995.0 TEXT11 TEXT12
File_2.txt needs to be merged at this specific position. I have tried different solutions with multiple do-while loops, but they have not been working so far.
awk '{
    getline s3 < "file1"
    printf "%s %s %s ",$1,$2,s3
    for(i=3;i<=NF;i++){
        printf "%s ",$i
    }
    print ""
}END{close("file1")}' file
output
# more file
TEXT01 TEXT02 TEXT03 TEXT04
TEXT05 TEXT06 TEXT07 TEXT08
TEXT09 TEXT10 TEXT11 TEXT12
$ more file1
1993.0
1994.0
1995.0
$ ./shell.sh
TEXT01 TEXT02 1993.0 TEXT03 TEXT04
TEXT05 TEXT06 1994.0 TEXT07 TEXT08
TEXT09 TEXT10 1995.0 TEXT11 TEXT12
Why, use cut and paste, of course! Give this a try:
paste -d" " <(cut -d" " -f 1-2 File_1.txt) File_2.txt <(cut -d" " -f 3-4 File_1.txt)
This was inspired by Dennis Williamson's answer, so if you like it, give that one a +1 too!
paste test1.txt test2.txt | awk '{print $1,$2,$5,$3,$4}'
This is a solution without awk.
The interesting part is how to use file descriptors in bash.
#!/bin/sh
exec 5<test2.txt  # open file descriptor 5
cat test1.txt | while read ln
do
    read ln2 <&5
    # change these three lines as you wish:
    echo -n "$(echo $ln | cut -d ' ' -f 1-2) "
    echo -n "$ln2 "
    echo $ln | cut -d ' ' -f 3-4
done
exec 5<&-  # close fd 5
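A slightly tighter variant of the same file-descriptor idea (a sketch, using the File_1.txt/File_2.txt names from the question):
while read -r a b rest && read -r year <&5; do
    echo "$a $b $year $rest"
done < File_1.txt 5< File_2.txt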
Since the question was tagged with 'sed', here's a variant of Vereb's answer using sed instead of awk:
paste File_1.txt File_2.txt | sed -r 's/( [^ ]* [^ ]*)\t(.*)/ \2\1/'
Or in pure sed ... :D
sed -r '/ /{H;d};G;s/^([^\n]*)\n*([^ ]* [^ ]*)/\2 \1/;P;s/^[^\n]*\n//;x;d' File_1.txt File_2.txt
Using perl, give file1 and file2 as arguments to:
#!/usr/local/bin/perl
open(TXT2, pop(@ARGV));
while (<>) {
    chop($m = <TXT2>);
    s/^((\w+\s+){2})/$1$m /;
    print;
}
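To run it (assuming the script is saved as merge.pl; the name is just for illustration):
perl merge.pl File_1.txt File_2.txt > Result.txt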
