sed for a string in only 1 line - linux

What I want to do here is locate any file that contains a specific string in a specific line, and remove said line, not just the string.
What I have is something along the lines of this:
find / -type f -name '*.foo' -exec sed '1,/stringtodetect/d' {} \;
However, given that sed argument, this will remove everything BETWEEN line 1 and the line containing the string (sed '1,/stringtodetect/d' "$file").
Let's say I have a .php file, and I'm looking for the string 'gotcha'.
I only want to edit the file if it has the string in the FIRST line of the file, like so:
gotcha with this.
gotcha
useful text
more text
dont delete me
If I ran the script, I'd want the contents of the same file to appear as such:
gotcha
useful text
more text
dont delete me
Any tips?

You are using the following range address for the delete command:
1,/stringtodetect/
This means: all lines from line 1 up to and including the first line that contains stringtodetect.
Furthermore, you need not (and should not!) iterate over the results from find. find has the -exec option for that. It executes a command for each file which has been found, passing the filename as an argument.
It should be (note that this deletes every line containing the string; add the line-1 address shown in the other answers if only the first line should be considered):
find / -type f -name '*.foo' -exec sed '/stringtodetect/d' {} \;
Test the command first. Once you are sure it works, use sed -i to modify the files in place. If you want a backup you can use sed -i.backup (for example). To remove the backups once you are sure you can use find again:
find / -type f -name '*.foo.backup' -delete

You need a sed script that skips, by line number, every line that is not the one you are interested in, and, for the line you are interested in, deletes it only if it matches.
sed -e1bt -eb -e:t -e/string/d < "$file"
-e1bt = for line 1, branch to label "t"
-eb = branch unconditionally to the end of the script (at which point it will print the line).
-e:t = define label "t"
-e/string/d = delete the line if it contains "string" - this instruction will only be reached if the unconditional branch to the end of the script was NOT taken, i.e. if the line number branch WAS taken.
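For example, run against the question's sample file with gotcha as the string (a quick sketch; without -i nothing is modified, the result only goes to stdout):
sed -e1bt -eb -e:t -e/gotcha/d < "$file"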

Could it be that it is matching part of a longer string?
If you try an exact match, it might help.
Also, remove the 1, at the beginning or replace it with 0,
sed '/<stringtodetect>/d' "$file";

sed is for simple substitutions on individual lines, that is all. For anything else just use awk for simplicity, clarity, robustness, portability and all of the other desirable attributes of software:
awk '!(NR==1 && /stringtodetect/)' file

You were close. I think what you're looking for is: sed '1{/gotcha/d;}'
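To apply that across a whole tree the way the question's find does, something along these lines should work (a sketch; assumes GNU sed for -i, and it's worth running it without -i first as a dry run):
find / -type f -name '*.php' -exec sed -i '1{/gotcha/d;}' {} \;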

Related

How to rename fasta header based on filename in multiple files?

I have a directory with multiple fasta files named as follows:
BC-1_bin_1_genes.faa
BC-1_bin_2_genes.faa
BC-1_bin_3_genes.faa
BC-1_bin_4_genes.faa
etc. (about 200 individual files)
The fasta headers look like this:
>BC-1_k127_3926653_6 # 4457 # 5341 # -1 # ID=2_6;partial=01;start_type=Edge;rbs_motif=None;rbs_spacer=None;gc_cont=0.697
I now want to add the filename to the header since I want to annotate the sequences for each file. I tried the following:
for file in *.faa;
do
sed -i "s/>.*/${file%%.*}/" "$file" ;
done
It worked partially but it removed the ">" from the header which is essential for the fasta file. I tried to modify the "${file%%.*}" part to keep the carrot but it always called me out on bad substitutions.
I also tried this:
awk '/>/{sub(">","&"FILENAME"_");sub(/\.faa/,x)}1' *.faa
This worked in theory but only printed everything on my terminal rather than changing it in the respective files.
Could someone assist with this?
It's not clear whether you want to replace the earlier header, or add to it. Both scenarios are easy to do. Don't replace text you don't want to replace.
for file in *.faa;
do
sed -i "s/^>.*/>${file%%.*}/" "$file"
done
will replace the header, but include a leading > in the replacement, effectively preserving it; and
for file in *.faa;
do
sed -i "s/^>.*/&${file%%.*}/" "$file"
done
will append the file name at the end of the header (& in the replacement string evaluates to the string we are replacing, again effectively preserving it).
For another variation, try
for file in *.faa;
do
sed -i "/^>/s/\$/ ${file%%.*}/" "$file"
done
which says on lines which match the regex ^>, replace the empty string at the end of the line $ with the file name.
Of course, your Awk script could easily be fixed, too. Standard Awk does not have an option to parallel the -i "in-place" option of sed, but you can easily use a temporary file:
for file in *.faa;
do
awk '/>/{ $0 = $0 " " FILENAME; sub(/\.faa/,"") }1' "$file" >"$file.tmp" &&
mv "$file.tmp" "$file"
done
GNU Awk also has an -i inplace extension which you could simply add to the options of your existing script if you have GNU Awk.
Since FASTA files typically contain multiple headers, adding to the header rather than replacing all headers in a file with the same string seems more useful, so I changed your Awk script to do that instead.
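For example, with GNU Awk 4.1 or later the same edit can be done in place directly (a sketch; there is no automatic backup here, so copy the files first):
gawk -i inplace '/>/{ $0 = $0 " " FILENAME; sub(/\.faa/,"") }1' *.faa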
For what it's worth, the name of the character ^ is caret (carrot is 🥕). The character > is called greater than or right angle bracket, or right broket or sometimes just wedge.
You just need to detect the pattern to replace and use regex to implement it:
fasta_helper.sh
location=$1
for file in "$location"/*.faa
do
full_filename=${file##*/}
filename="${full_filename%.*}"
# escape any slashes in the filename so it is safe inside the sed replacement
filename=$(echo "$filename" | sed 's_/_\\/_g')
echo "adding file name: $filename to: $full_filename"
sed -i -E "s/^>[^#]+/>$filename /" "$location/$full_filename"
done
usage:
Just pass the folder with fasta files:
bash fasta_helper.sh /foo/bar
Further reading:
Regex: matching up to the first occurrence of a character
Extract filename and extension in Bash
https://unix.stackexchange.com/questions/78625/using-sed-to-find-and-replace-complex-string-preferrably-with-regex
Locating your files
I suggest first identifying your files with the find command or the ls command.
find . -type f -name "*.faa" -printf "%f\n"
This find command prints only the names of files with the .faa extension, including files in subdirectories of the current directory.
ls -1 *.faa
This ls command prints files and directories with the .faa extension in the current directory.
Processing your files
Once you have the correct file list, iterate over it and apply the sed command.
for fileName in $(find . -type f -name "*.faa" -printf "%f\n"); do
strippedFileName=${fileName/.*/} # strip extension .faa
sed -i "1s|\$| $strippedFileName|" "$fileName" # append value of strippedFileName at end of line 1
done
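If some of the .faa files sit in subdirectories or have awkward names, a variant that avoids the word splitting of the loop above may be safer (a sketch; assumes GNU sed for -i):
find . -type f -name "*.faa" -exec sh -c 'f=$(basename "$1" .faa); sed -i "1s|\$| $f|" "$1"' sh {} \;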

Replace spaces in all files in a directory with underscores

I have found some similar questions here but not this specific one and I do not want to break all my files. I have a list of files and I simply need to replace all spaces with underscores. I know this is a sed command but I am not sure how to generically apply this to every file.
I do not want to rename the files, just modify them in place.
Edit: To clarify, just in case it's not clear, I only want to replace whitespace within the files, file names should not be changed.
find . -type f -exec sed -i -e 's/ /_/g' {} \;
find grabs all items in the directory (and subdirectories) that are files, and passes those filenames as arguments to the sed command using the {} \; notation. The sed command it appears you already understand.
if you only want to search the current directory, and ignore subdirectories, you can use
find . -maxdepth 1 -type f -exec sed -i -e 's/ /_/g' {} \;
This is a two-part problem. Step 1 is writing the proper sed command; step 2 is applying it to every file in a given directory.
Substitution in sed commands follows the form s/ItemToReplace/ItemToReplaceWith/flags, where s stands for substitution and the trailing flags control how the operation takes place. According to this Super User post, to match whitespace characters you can use a literal space, \s, or [[:space:]]; the last two also catch tabs and other whitespace, and [[:space:]] is the POSIX-compliant form. Lastly, you need to specify a global operation, which is simply /g at the end. This replaces all spaces in a file with underscores.
Therefore, your sed command is
sed 's/ /_/g' FileNameHere
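A quick way to sanity-check that substitution on a throwaway string before touching any files (a sketch):
printf 'hello world  example\n' | sed 's/ /_/g'
which prints hello_world__example.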
However, this only accomplishes half of your task. You also need to be able to do this for every file within a directory. Unfortunately, wildcards won't save us in the sed command, as * > * would be ambiguous. Your only real option is to iterate through each file and overwrite them individually. For loops come equipped with file iteration syntax, and when used with wildcards they expand out to all files in a directory. However, redirecting sed's output back into the file it is reading truncates that file before sed gets to it, so the contents are lost. To avoid this, use sed with the -i flag so it edits the files in place. Whatever suffix you pass after the -i flag will be used to create a backup of the old file; if you pass an empty suffix (-i '' with BSD/macOS sed, for instance), no backup is created.
Therefore the final command should simply be
for i in *; do sed -i '' 's/ /_/g' "$i"; done
This loops over everything in your current directory and edits each file in place (directories do get matched by the glob, but sed simply reports an error and takes no action on them).
Well... since I was trying to get something running I found a method that worked for me:
for file in *; do sed -i 's/ /_/g' "$file"; done

Extract Last Number from Filename in Bash

I have a lot of files to rename. Nearly all of those files are pictures.
The Source Filenames are something like:
DSC08828.JPG => 08828.JPG
20130412_0001.JPG => 0001.JPG
0002.JPG => 0002.JPG
IMG0047.jpg => 0047.jpg
DSC08828_1.JPG => Is a duplicate should be ignored
...
DSC08828_9.JPG => Is a duplicate should be ignored
All I want to do is to get the last number followed by the file extension, in a way that is as fast as possible (as we are talking about nearly 600,000 pictures).
So I want to take the run of at least two digits that sits just before the dot (stopping at the first non-digit character), together with the extension. If there is only one digit before the dot, the file should be ignored.
Here's a method using sed which may improve performance:
ls *.{JPG,jpg} | \
sed '
/_[1-9]*\./d; # first drop any line that appears to be a duplicate
/^[0-9]*\./d; # drop any line that does not need to be renamed
s/\(.*\)/\1 \1/; # save the original filename by duplicating the pattern space
s/ .*_/ /; # remove any leading characters followed by and including _ in the new filename
s/ [A-Z]*/ /; # remove any leading capital letters from the new filename
s/^/mv -i /; # finally insert mv command at the beginning of the line
'
When you're satisfied with the generated commands, feed them to sh.
Input:
0002.JPG
20130412_0001.JPG
DSC08828.JPG
DSC08828_1.JPG
DSC08828_9.JPG
IMG0047.jpg
Output:
mv -i 20130412_0001.JPG 0001.JPG
mv -i DSC08828.JPG 08828.JPG
mv -i IMG0047.jpg 0047.jpg
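Once the generated mv commands look right, they can be piped straight to the shell, for example (a sketch; rename.sed is a hypothetical file holding the sed script above):
ls *.{JPG,jpg} | sed -f rename.sed | sh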
for x in ./*.JPG ./*.jpg; do
y=$(echo "$x"|sed 's/[^0-9]//g');
echo "$x" "$y";
done
While I'm not giving you the final answer on the plate, this should get you started and illustrate the technique how to approach the tasks you described.
Depending on what you want to do with the files afterwards, you could also combine find and grep, such as find . -type f | grep -v '_[0-9]\.' to filter out all files containing _ followed by one digit, followed by one dot (not tested, escaping might be necessary). -v is used to negate the matches filtered by grep.
Since in your post you said you want to rename files AND provided an example where some files are filtered out, I'm guessing you'll need both: first, filter out the files you don't want, and then rename the remaining ones in a for loop.
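A rough sketch of that two-step idea (illustrative only; it assumes GNU tools and filenames without embedded newlines, and it only echoes the mv commands rather than running them):
find . -type f -iname '*.jpg' | grep -v '_[0-9]\.' | while IFS= read -r f; do
n=$(echo "$f" | sed -nE 's%^.*[^0-9]([0-9]{2,}\.[^.]+)$%\1%p')
[ -n "$n" ] && echo mv -i "$f" "$(dirname "$f")/$n"
done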
sed -nr 's%^.*[^0-9]([0-9]{2,}\.[^.]+)$%\1%p' < <(find ./ -type f -iname '*.JPG')
sed is dramatically faster than bash at regexp processing, so use it instead of =~ whenever possible.

How to remove multiple lines in multiple files on Linux using bash

I am trying to remove 2 lines from all my Javascript files on my Linux shared hosting. I wanted to do this without writing a script as I know this should be possible with sed. My current attempt looks like this:
find . -name "*.js" | xargs sed -i ";var
O0l='=sTKpUG"
The second line is actually longer than this but is malicious code so I have not included it here. As you guessed my server has been hacked so I need to clean up all these JavaScript files.
I forgot to mention that the output I am getting at the moment is:
sed: -e expression #1, char 4: expected newer version of sed
The 2 lines are just as follows consecutively:
;var
O0l='=sTKpUG
except that the second line is longer, but the rest of the second line should not influence the command.
He meant removing two adjacent lines.
You can do something like this; remember to back up your files first.
find . -name "*.js" | xargs sed -i -e "/^;var/N;/^;var\nO0l='=sTKpUG/d"
Since sed processes the input file line by line, the pattern space does not normally contain a newline character, so we use the N command to append the next line to it, together with the newline.
/^;var/N;
Then we do our pattern searching and deleting.
/^;var\nO0l='=sTKpUG/d
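To sanity-check the pattern before running it over the real files, feed sed a fake three-line file (a sketch; the second line is shortened to O0l= here because the full injected string wasn't shown):
printf ';var\nO0l=junk\nkeep_this_line();\n' | sed -e "/^;var/N;/^;var\nO0l=/d"
which should print only keep_this_line();.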
It really isn't clear yet what the two lines look like, and it isn't clear if they are adjacent to each other in the JavaScript, so we'll assume not. However, the answer is likely to be:
find . -name "*.js" |
xargs sed -i -e '/^distinctive-pattern1$/d' -e '/^alternative-pattern-2a$/d'
There are other ways of writing the sed script using a single command string; I prefer to use separate arguments for separate operations (it makes the script clearer).
Clearly, if you need to keep some of the information on one of the lines, you can use a search pattern adjusted as appropriate, and then do a substitute s/short-pattern// instead of d to remove the short section that must be removed. Similarly with the long line if that's relevant.
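For instance, if the injected ;var had been tacked onto the end of an otherwise legitimate line instead of being a line of its own, you could strip just that fragment and still delete the second line whole (a sketch; the patterns are illustrative, not the real malware strings):
find . -name "*.js" | xargs sed -i -e 's/;var$//' -e "/^O0l='=sTKpUG/d"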

Remove line feed(s) from start of specific files

I need to remove any occurrences of line feeds (carriage returns on a mac) from the very start of all files with the .php or .html extension. There are no other characters between the line feeds like spaces or anything.
So for example (using /lf as an example of line feed):
/lf
/lf
<!doctype html>
or
/lf
<!doctype html>
should be reduced down to:
<!doctype html>
One way of removing line feeds I've found is:
tr -d '\012'
But I have no idea how to target this at specific files, let alone only the first few lines.
So I've got the following:
find . \( -name "*.php" -or -name "*.html" \) | xargs grep -l "\012" | xargs sed -i -e "s/\012//g"
But this won't target only the first few lines, and I'm not entirely sure if it correctly targets line feeds either.
So, anyone got any bright ideas?
Try:
sed -i '/./,$\!d' filename
or even from find:
find . \( -name "*.php" -or -name "*.html" \) -exec sed -i '/./,$\!d' {} \;
EDIT:
The \ before the !d may not be needed, in my shell I need to escape it because csh keeps thinking I'm referring to a previous event via the ! symbol.
EDIT 2:
So, the /./,$\!d bit looks like gibberish, but here's what's happening.
There are 2 addresses being defined here. The first is the regex /./, which matches any line containing at least one character, i.e. anything that isn't a blank line; so as a range start it resolves to the first non-blank line in the file.
Then we have the second address, separated by the ,, and it's simply $, the end of the file. So the region we've defined by our 2 addresses is the first non-blank line all the way to the end of the file.
We're going to use sed's delete function here, which is denoted by the last d in the script. However, by using d, we'd be deleting everything starting from the first non-blank line to the end of the file.
Lastly, because we'd be deleting the very thing that we want, we use a ! right in front of the d command to tell sed, "ok, do exactly the opposite of what I'm telling you to do instead". Thus, instead of deleting everything starting from the first non-blank line to the end of the file, we're doing the complete opposite, preserving the first non-blank line to the end of the file, which has the effect of deleting all of the blank lines at the beginning of the file.
There's probably some way to do this using the p (print) command, which is sort of like the opposite of delete, but doesn't really behave that way. I'm sure there's some way to do this using p or !p.
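You can see the effect without touching any files by faking a file with leading blank lines (a sketch; no backslash is needed before the ! when the script is single-quoted in bash):
printf '\n\n<!doctype html>\n<p>hi</p>\n' | sed '/./,$!d'
which prints just the <!doctype html> and <p>hi</p> lines.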
Perl is good for this type of processing if you have that installed. You could do a little "do .. until" loop that exits once it finds a line with non-whitespace characters. Off the top of my head:
do {
s/^\s$//;
} until ( /^\S/ );
(But verify those regular expressions do what you want them to first!)
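If Perl is on the box, the same idea also fits in a one-liner that edits in place (a sketch; -0777 slurps the whole file so the leading newlines can be stripped in one go, and you can wrap it in the find command from the other answers to cover all the .php and .html files):
perl -0777 -pi -e 's/\A\n+//' file.php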
Use:
find /path/to/root/directory -type f -exec sh -c 'tr -d "\012" < "$1" > "$1.tmp" && mv "$1.tmp" "$1"' sh {} \;
where /path/to/root/directory is the top-level path under which to look for the files. Note that tr only reads standard input (hence the redirection and temporary file), and that this strips every line feed in each file, not just the ones at the start.
If you know that the line feeds are only in the first, say, 10 lines, then you can restrict the sed command so that it only operates on the first ten lines, deleting the blank lines it finds there (a plain s/\012//g cannot work, because the newline is never part of sed's pattern space). That's the 1,10 below.
xargs sed -i -e '1,10{/^$/d;}'
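Putting it together with the find from your own attempt (a sketch; -print0 and -0 keep filenames with spaces intact, and this assumes GNU find, xargs and sed):
find . \( -name "*.php" -o -name "*.html" \) -print0 | xargs -0 sed -i -e '1,10{/^$/d;}'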
