Extracting data from files and appending it to a target file in Linux using sed

I want to extract some data from the files minimumThickness*.k and put it in the file results.txt.
Each file minimumThickness*.k has only a double value on its first line.
The files minimumThickness*.k are a series of files numbered 1 to 100, like
minimumThickness1.k
minimumThickness2.k
minimumThickness3.k
...
minimumThickness100.k
I used the following command, but it was not successful:
sed -n '/^[0-9.]*$/w results.txt' minimumThickness*.k
I could also use a loop:
for i in $(seq 1 100); do
  thickness=$(awk '{print $1}' minimumThickness$i.k)
  echo "$thickness" >> results.txt
done
Kindly tell me what the problem with sed is, or suggest a better way of using sed. I would appreciate any elegant method.
Best regards.

[0-9.]* also matches the empty string, so it matches every line and you may not be seeing the expected result. You can try [0-9]*\.[0-9]* to match doubles (with some modifications).

If you only need the first line of each file:
head -qn 1 minimumThickness*.k > results.txt
(With multiple files, head prints ==> filename <== headers between them; the -q flag suppresses these.)

This might work for you (GNU sed):
sed -sn '1w results.txt' minimumThickness*.k
or
head -qn1 minimumThickness*.k > results.txt
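The two one-liners above can be checked with a quick demo. This is a minimal sketch assuming GNU sed (for -s) and GNU head (for -q); the directory name and sample values are invented for the demo:

```shell
# Create three sample files, each with a double on its first line.
mkdir -p /tmp/thickdemo && cd /tmp/thickdemo
printf '1.25\n' > minimumThickness1.k
printf '2.50\n' > minimumThickness2.k
printf '3.75\n' > minimumThickness3.k

# GNU sed: -s treats each file separately, so '1w' writes line 1 of every file.
sed -sn '1w results.txt' minimumThickness*.k
cat results.txt

# Equivalent with head: -q suppresses the "==> file <==" headers.
head -qn1 minimumThickness*.k
```

Both commands produce one line per input file, in glob order.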

Related

display specific lines in all files in a directory in linux

I have 200 text files in folder F. I want to see lines 2-4 of all files. I have tried something like:
$ sed -n '2,5p' *.txt
but it only reads the first file. Can anybody please help?
Furthermore, I might need to send these lines to a new file, something like:
$ sed -n '2,5p' *.txt > path
My knowledge of linux is basic, so if you have a totally different solution, please be more specific.
awk 'FNR>1 && FNR<5' *.txt > result.txt
(FNR is the per-file line number, so this prints lines 2-4 of every file.)
This might work for you (GNU sed):
sed -ns '2,4p' *.txt > results.txt
If you just want to capture the results:
sed -ns '2,4w results.txt' *.txt
Another way to see and capture the results:
sed -ns '2,4!b;p;w results.txt' *.txt
See the GNU sed manual for the -s (--separate) invocation option.
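A quick sanity check of the per-file behaviour of FNR and of sed -s; the file names and contents here are invented for the demo:

```shell
# Two five-line sample files.
mkdir -p /tmp/linesdemo && cd /tmp/linesdemo
printf 'a\nb\nc\nd\ne\n' > f1.txt
printf 'v\nw\nx\ny\nz\n' > f2.txt

# FNR resets at the start of each input file,
# so this keeps lines 2-4 of every file.
awk 'FNR>1 && FNR<5' *.txt > result.txt

# Same with GNU sed: -s restarts line numbering per file.
sed -ns '2,4p' *.txt
```

Without -s, sed would number the two files as one continuous stream and print lines 2-4 only once.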

How do I copy line X from a bunch of files to another file?

So my problem is as follows:
I have a bunch of files and I need only the information from a certain line in each of these files (the same line for all files).
Example:
I want the content of line 10 from files example_1.dat through example_10.dat, and then I want to save it to test.dat.
I tried using head -n 5 example_*.dat > test.dat, but this gives me all the information from the top down to the chosen line instead of just that line.
Please help.
$ for f in *.dat ; do sed -n '5p' "$f" >> test.dat ; done
This code does the following:
For each file f in the directory that ends with .dat:
use sed to print the 5th row of the file and write it to test.dat.
The ">>" appends the row at the bottom of test.dat if the file already exists.
Use a combination of head and tail to zoom to the needed line. For example, head -n 5 file | tail -n 1
You can use a for loop to get it done over several files
for f in *.dat ; do head -n 5 $f | tail -n 1 >> test.dat ; done
PS: Don't forget to clean the test.dat file (> test.dat) before running the loop. Otherwise you'll get results from previous runs as well.
You can use sed or awk:
sed -n "5p"
awk "NR == 5"
This might work for you (GNU sed):
sed -sn '5w test.dat' example_*.dat
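A minimal demo of the head | tail loop from above; the sample files and their contents are invented, and line 5 is used as the target line to match the answers:

```shell
# Two sample files, five lines each, with a recognizable fifth line.
mkdir -p /tmp/line5demo && cd /tmp/line5demo
for i in 1 2; do
  printf 'l1\nl2\nl3\nl4\nline5-of-file%s\n' "$i" > example_$i.dat
done

# Truncate first, since the loop appends.
> test.dat
# head keeps the first 5 lines; tail keeps the last of those, i.e. line 5.
for f in example_*.dat ; do head -n 5 "$f" | tail -n 1 >> test.dat ; done
cat test.dat
```

The result is exactly one line per input file.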

parsing data in file

I have a text file with the following type of data in it below:
Example:
10212012115655_113L_-247R_247LRdiff_0;
10212012115657_114L_-246R_246LRdiff_0;
10212012115659_115L_-245R_245LRdiff_0;
10212012113951_319L_-41R_41LRdiff_2;
10212012115701_116L_-244R_244LRdiff_0;
10212012115703_117L_-243R_243LRdiff_0;
10212012115705_118L_-242R_242LRdiff_0;
10212012113947_317L_-43R_43LRdiff_0;
10212012114707_178L_-182R_182LRdiff_3;
10212012115027_278L_-82R_82LRdiff_1;
I would like to:
1) copy all the data lines that have _1, _2 or _3 at the end into another file, and
2) strip out the semicolon at the end of each.
So at the end the data in the file will be
Example:
10212012113951_319L_-41R_41LRdiff_2
10212012114707_178L_-182R_182LRdiff_3
10212012115027_278L_-82R_82LRdiff_1
How can I go about doing this?
I'm using linux ubuntu 10.04 64bit
Thanks
Here's one way using sed:
sed -n 's/\(.*_[123]\);$/\1/p' file.txt > newfile.txt
Here's one way using grep:
grep -oP '.*_(1|2|3)(?=;$)' file.txt > newfile.txt
Contents of newfile.txt:
10212012113951_319L_-41R_41LRdiff_2
10212012114707_178L_-182R_182LRdiff_3
10212012115027_278L_-82R_82LRdiff_1
If the format is always the same and there is only a semicolon at the very end of each line, you can use grep to find the lines and then sed to remove the ;:
grep -P "_(1|2|3);$" your_file | sed 's/\(.*\);$/\1/' > your_new_file
The -P in the grep command tells it to use the Perl-regex interpreter for parsing. Alternatively, you could use egrep (if available).
Here is an awk solution, in case you are interested:
awk '/_[321];$/{gsub(/;/,"");print}' your_file
tested below:
> awk '/_[321];$/{gsub(/;/,"");print}' temp
10212012113951_319L_-41R_41LRdiff_2
10212012114707_178L_-182R_182LRdiff_3
10212012115027_278L_-82R_82LRdiff_1
tr -d ';' < file.txt > tmpfile
grep '_[123]$' tmpfile > newfile
This should work. First delete every ; and save the result to a temporary file. Then use grep to keep only the lines ending in _[123] and save the matching lines to the destination file. To anchor the match at the end of the line I used $. Note that you cannot read from and write to the same file in one command; the redirection truncates the file before grep reads it, which is why a temporary file is needed.
See the tr and grep man pages for examples, in case you are not familiar with them.
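The sed answer above can be checked against a cut-down sample of the data (only four of the original lines are reproduced here):

```shell
mkdir -p /tmp/parsedemo && cd /tmp/parsedemo
cat > file.txt <<'EOF'
10212012115655_113L_-247R_247LRdiff_0;
10212012113951_319L_-41R_41LRdiff_2;
10212012114707_178L_-182R_182LRdiff_3;
10212012115027_278L_-82R_82LRdiff_1;
EOF

# Keep only lines ending in _1, _2 or _3, dropping the trailing semicolon:
# the capture group excludes the ';' and \1 prints just the captured part.
sed -n 's/\(.*_[123]\);$/\1/p' file.txt > newfile.txt
cat newfile.txt
```

The _0 line is filtered out and the three remaining lines lose their semicolons.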

How to remove a special character in a string in a file using linux commands

I need to remove the character : from a file. Ex: I have numbers in the following format:
b3:07:4d
I want them to be like:
b3074d
I am using the following command:
grep ':' source.txt | sed -e 's/://' > des.txt
I am new to Linux. The file is quite big and I want to make sure I'm using the right command.
You can do without the grep:
sed -e 's/://g' source.txt > des.txt
The -i option edits the file in place:
sed -i 's/://g' source.txt
The first part isn't right, as the grep will completely omit lines which don't contain :.
Below is untested but should be right. The g at the end of the regex means global, i.e. it replaces all occurrences:
sed -e 's/://g' source.txt > out.txt
Updated to the better syntax from Jon Lin's answer, but you still want the /g, I would think.
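A tiny demo of why the /g flag matters; the sample strings are invented for the demo:

```shell
mkdir -p /tmp/colondemo && cd /tmp/colondemo
printf 'b3:07:4d\naa:bb:cc\n' > source.txt

# Without /g, only the first colon on each line is removed:
sed -e 's/://' source.txt
# With /g, every colon goes:
sed -e 's/://g' source.txt > des.txt
cat des.txt
```

No grep step is needed; lines without a colon pass through sed unchanged.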

Grep and inserting a string

I have a text file with a bunch of file paths such as -
web/index.erb
web/contact.erb
...
etc. I need to append a line of code before the
</head>
in every single file. I'm trying to figure out how to do this without opening each file manually, of course. I've heard of sed, but I've never used it before. I was hoping there might be a grep command, maybe?
Thanks
xargs can be used to apply sed (or any other command) to each filename or argument in a list. Combining that with Rom1's answer gives:
xargs sed -i 's/<\/head>/myline\n<\/head>/g' < fileslist.txt
while read -r f ; do
  sed -i '/<\/head>/i *iamthelineofcode*' "$f"
done < iamthefileoffiles.list
or
sed -i '/<\/head>/i *iamthelineofcode*' $(cat iamthefileoffiles.list)
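To see the insertion in action, here is a sketch; the file name, list file, and the stylesheet line being inserted are all invented for the demo, and GNU sed is assumed (for -i and the one-line i command):

```shell
mkdir -p /tmp/headdemo && cd /tmp/headdemo
# A minimal page and a one-entry file list.
printf '<html><head>\n</head><body></body></html>\n' > web_index.erb
printf 'web_index.erb\n' > fileslist.txt

# GNU sed's 'i' command inserts the given text before each matching line.
while read -r f ; do
  sed -i '/<\/head>/i <link rel="stylesheet" href="x.css">' "$f"
done < fileslist.txt
cat web_index.erb
```

The new line lands immediately before the line containing </head> in every listed file.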
