display specific lines in all files in a directory in linux

I have 200 text files in folder F. I want to see lines 2-4 of all the files. I have tried something like:
$ sed -n '2,5p' *.txt
but it only shows lines from the first file. Can anybody please help?
Furthermore, I might need to send these lines to a new file, something like:
$ sed -n '2,5p' *.txt > path
My knowledge of Linux is basic, so if you have a totally different solution, please be specific.

awk 'FNR>1 && FNR<5' *.txt > result.txt
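For context: FNR is awk's per-file line counter (it resets to 1 at the start of each input file, unlike NR), which is why the one-liner above picks lines 2-4 out of every one of the 200 files rather than only the first. A hedged variant, in case you also want to see which file each line came from (FILENAME is a built-in awk variable):
awk 'FNR >= 2 && FNR <= 4 { print FILENAME ": " $0 }' *.txt > result.txt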

This might work for you (GNU sed):
sed -ns '2,4p' *.txt > results.txt
If you just want to capture the results:
sed -ns '2,4w results.txt' *.txt
Another way to see and capture the results:
sed -ns '2,4!b;p;w results.txt' *.txt
See the GNU sed manual for the -s (--separate) invocation option, which treats each input file separately instead of as one long stream.

Related

Linux command to replace set of lines for a group of files under a directory

I need to replace the first 4 header lines of only 250 selected Erlang files (with extension .erl), but there are 400 Erlang files in total in the directory and its subdirectories, and I need to avoid modifying the files which don't need the change.
I have the list of file names to be modified, but I don't know how to make my Linux command use it.
sed -i '1s#.*#%% This Source Code Form is subject to the terms of the Mozilla Public#' *.erl
sed -i '2s#.*#%% License, v. 2.0. If a copy of the MPL was not distributed with this file,#' *.erl
sed -i '3s#.*#%% You can obtain one at http://mozilla.org/MPL/2.0/.#' *.erl
sed -i '4s#.*##' *.erl
In the above commands, instead of passing *.erl I want to pass that list of file names I need to modify; doing it one by one would take me more than 3 days to complete.
Is there any way to do this?
Iterate over the shortlisted file names using awk and use xargs to run sed. You can apply multiple sed commands to a file using the -e option.
awk '{print $1}' your_shortlisted_file_lists | xargs sed -i -e 'first_sed' -e 'second_sed'
xargs appends the file names printed by awk to the end of the sed command.
Try this:
< file_list.txt xargs -L1 sed -i -e 'first_cmd' -e 'second_cmd' ...
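Putting the pieces together, a sketch under the assumption that the shortlist is a plain file, here called erl_files.list, with one path per line (GNU xargs can read it directly with -a):
xargs -a erl_files.list sed -i \
-e '1s#.*#%% This Source Code Form is subject to the terms of the Mozilla Public#' \
-e '2s#.*#%% License, v. 2.0. If a copy of the MPL was not distributed with this file,#' \
-e '3s#.*#%% You can obtain one at http://mozilla.org/MPL/2.0/.#' \
-e '4s#.*##'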
Not answering your question, but a suggestion for improvement: four sed commands to replace the header are inefficient. I would instead write the new header into a file and do the following:
sed -i -e '1,3d' -e '4{r header' -e 'd}' file
which replaces the first four lines of the file with the contents of header.
Another concern with your current s### approach is that you have to watch for the special characters \ and & and for your delimiter # in the replacement text.
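A minimal sketch of that idea applied to the shortlist, assuming the four new header lines live in a file called new_header and the list of files in erl_files.list (both names are made up here):
while IFS= read -r f; do
  sed -i -e '1,3d' -e '4{r new_header' -e 'd}' "$f"
done < erl_files.list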
You can apply the sed c (for change) command to each file of your list :
while read -r file; do
sed -i '1,4 c\
%% This Source Code Form is subject to the terms of the Mozilla Public\
%% License, v. 2.0. If a copy of the MPL was not distributed with this file,\
%% You can obtain one at http://mozilla.org/MPL/2.0/.\
' "$file"
done < filelist
Let's say you have a file called file_list.txt with all file names as content:
file1.txt
file2.txt
file3.txt
file4.txt
You can simply read all lines into a variable (here: files) and then iterate through each one:
files=`cat file_list.txt`
for file in $files; do
echo "do something with $file"
done
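Note that this relies on word splitting, so it will break on file names containing spaces; a slightly more robust sketch reads the list line by line instead:
while IFS= read -r file; do
  echo "do something with $file"
done < file_list.txt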

How to remove a special character in a string in a file using linux commands

I need to remove the character : from a file. Ex: I have numbers in the following format:
b3:07:4d
I want them to be like:
b3074d
I am using the following command:
grep ':' source.txt | sed -e 's/://' > des.txt
I am new to Linux. The file is quite big & I want to make sure I'm using the right command.
You can do without the grep:
sed -e 's/://g' source.txt > des.txt
The -i option edits the file in place.
sed -i 's/://' source.txt
The grep part isn't right, as it will completely omit lines which don't contain a :.
Below is untested but should be right. The g at the end of the substitution is for global, meaning it replaces every occurrence on each line, not just the first.
sed -e 's/://g' source.txt > out.txt
Updated to the better syntax from Jon Lin's answer, but you still want the /g, I would think.
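Combining the two points above (edit the file in place, and use /g so every colon on a line is removed rather than just the first), one way to do it in a single step:
sed -i 's/://g' source.txt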

Find and replace in shell scripting

Is it possible to search in a file using the shell and then replace a value? When I install a service, I would like to be able to find a variable in a config file and then replace/insert my own setting for that value.
Sure, you can do this using sed or awk. sed example:
sed -i 's/Andrew/James/g' /home/oleksandr/names.txt
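For completeness, a rough awk equivalent (a sketch only; plain awk has no in-place option like sed -i, so it writes to a temporary file and then renames it):
awk '{ gsub(/Andrew/, "James"); print }' /home/oleksandr/names.txt > names.tmp && mv names.tmp /home/oleksandr/names.txt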
You can use sed to perform search/replace. I usually do this from a bash shell script: move the original file containing the values to be substituted to a new name, then run sed and write the output to the original file name, like this:
#!/bin/bash
mv myfile.txt myfile.txt.in
sed -e 's/PatternToBeReplaced/Replacement/g' myfile.txt.in > myfile.txt
If you don't specify an output, the replacement will go to stdout.
sed -i 's/variable/replacement/g' *.conf
You can use sed to do this:
sed -i 's/toreplace/yoursetting/' configfile
sed is probably available on every Unix-like system out there. If you want to replace more than one occurrence per line, you can add a g to the s command:
sed -i 's/toreplace/yoursetting/g' configfile
Be careful, since this can completely destroy your config file if you don't specify your toreplace value correctly. sed also supports regular expressions in searching and replacing.
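For instance, anchoring the pattern to the whole line reduces the chance of clobbering something else; the key name below is a made-up example:
sed -i 's/^ListenPort=.*/ListenPort=8080/' configfile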
Look at the UNIX power tools awk, sed, grep and in-place edit of files with Perl.
filepath="/var/start/system/dir1"
searchstring="test"
replacestring="test01"
i=0
# Find every file under $filepath containing the search string, back it up,
# then rewrite it with the replacement applied (case-insensitive, all occurrences).
for file in $(grep -l -R "$searchstring" "$filepath")
do
  cp "$file" "$file.bak"
  sed -e "s/$searchstring/$replacestring/ig" "$file" > tempfile.tmp
  mv tempfile.tmp "$file"
  let i++
  echo "Modified: $file"
done
Generally a tool like awk or sed are used for this.
$ sed -i 's/ugly/beautiful/g' /home/bruno/old-friends/sue.txt

extracting data from a file and appending to the target file in linux using sed

I want to extract some data from the files minimumThickness*.k and put it in the file results.txt.
Each minimumThickness*.k file has only double values in its first line.
The files minimumThickness*.k are a series of files numbered from 1 to a hundred, like
minimumThickness1.k
minimumThickness2.k
minimumThickness3.k
...
minimumThickness100.k
I used the following command to do it but was not successful.
sed -n '/^[0-9.]*$/w results.txt' minimumThickness*.k
I could also use a loop instead, something like:
for i in $(seq 1 100); do
  thickness=$(awk 'NR==1 {print $1}' "minimumThickness$i.k")
  echo "$thickness" >> results.txt
done
Kindly tell me what the problem with the sed command is, or suggest a better way of using it. I would appreciate any elegant method.
Best regards.
^[0-9.]*$ also matches empty lines (the * allows zero characters), so you may not be seeing the expected result. You can try [0-9]*\.[0-9]* to match doubles (with some modifications).
If you only need the first line of each file (note that head prints a ==> filename <== header before each file when given more than one, hence the -q used further below):
head -n 1 minimumThickness*.k > results.txt
This might work for you (GNU sed):
sed -sn '1w results.txt' minimumThickness*.k
or
head -qn1 minimumThickness*.k > results.txt

Grep and inserting a string

I have a text file with a bunch of file paths such as -
web/index.erb
web/contact.erb
...
etc. I need to insert a line of code before the
</head>
in every single file. I'm trying to figure out how to do this without opening each file, of course. I've heard of sed, but I've never used it before... I was hoping there would be a grep command, maybe?
Thanks
xargs can be used to apply sed (or any other command) to each filename or argument in a list. So combining that with Rom1's answer gives:
xargs sed -i 's/<\/html>/myline\n<\/html>/g' < fileslist.txt
while read -r f; do
  sed -i '/<\/head>/i *iamthelineofcode*' "$f"
done < iamthefileoffiles.list
or
sed -i '/<\/head>/i*iamthelineofcode*' $(cat iamthefileoffiles.list)
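The two ideas can also be combined: feed the file list to xargs and use sed's insert command instead of a substitution (*iamthelineofcode* is still just a placeholder for the real line):
xargs sed -i '/<\/head>/i *iamthelineofcode*' < iamthefileoffiles.list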
