I have the following file
vol-12345678 gp2
vol-89dfg58g
VOLUMES 2016-03-17T22:03:08.374Z False 100 16 snap-7073d522 in-use vol-4568gds4 gp2
ATTACHMENTS 2016-03-17T22:03:08.000Z True /dev/sda1 i-181ed33c attached vol-7ea1c83f
etc.
etc.
I want to extract all instances of 'vol-********' and output them to a file (without the other contents), resulting in a file of:
vol-12345678
vol-34556767
vol-34534sdf
...
This is a relatively small file so I could do it manually, but I have another file with 200+ cases. Any idea how to do this using grep, sed or awk? Thanks!
This should do it:
grep -o 'vol-[[:alnum:]]*' input.data | sort -u > output.data
UPDATE
Command:
sed -n 's/.*\b\(vol-[[:alnum:]]*\).*/\1/p' test2
Output:
vol-12345678
vol-89dfg58g
vol-4568gds4
vol-7ea1c83f
Flags:
n : Suppress automatic printing of pattern space.
p : Print out the pattern space
Pattern:
Look for 'vol-' followed by alphanumeric characters
Substitute the whole line with the first captured group (\1) and print it
More details: the sed documentation.
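If, like the grep one-liner above, you want the unique IDs written straight to a file, you can pipe the sed output through sort -u (a small sketch combining the two commands already shown; output.data is just an example name):
sed -n 's/.*\b\(vol-[[:alnum:]]*\).*/\1/p' test2 | sort -u > output.data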
I am using the command below to retrieve the HDFS quota, but I don't want the fancy output. Instead I need the output stored in a comma- or tab-separated format; by default it is not tab separated. Can anyone suggest how?
Command:
hdfs dfs -count -q -h -v /path/to/directory
Output is like this:
none inf 250 G 114.9 G 518 2.8 K 45.0 G /new/directory/X
Expected Output:
none,inf,250 G,114.9 G,518,2.8 K,45.0 G,/new/directory/X
How about using sed? The key thing is to identify a unique string that marks the field separator in the hdfs output. That could be a tab, but the sample output you posted uses spaces.
Once you decide on a unique string, use sed to search for it and replace it with a comma. It looks like two or more spaces separate the fields in the hdfs output in all cases except the start of the line and the path. Perhaps you can accept a leading comma and do a second pass of sed for the path.
This Stack Overflow question covers sed replacing consecutive spaces.
hdfs dfs -count -q -h -v /path/to/directory | sed -e "s/[[:space:]]\{2,\}/,/g" | sed -e "s/[[:space:]]\//,\//g"
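If the hdfs output begins with whitespace, that first substitution will leave a leading comma; a hedged tweak (assuming GNU sed, as above) strips it in the same pipeline:
hdfs dfs -count -q -h -v /path/to/directory | sed -e "s/[[:space:]]\{2,\}/,/g" -e "s/^,//" -e "s/[[:space:]]\//,\//g"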
The solution is even simpler if they are tabs.
hdfs dfs -count -q -h -v /path/to/directory | sed -e $'s/\t/,/g'
I need to extract characters 5 to 11 from my fastq.gz data; this data is just too large to process in R. So I was wondering if I can do it directly on the Linux command line?
The fastq file looks like this:
#NB501399:67:HFKTCBGX5:1:11101:13202:1044 1:N:0:CTTGTA
GAGGTNACGGAGTGGGTGTGTGCAGGGCCTGGTGGGAATGGGGAGACCCGTGGACAGAGCTTGTTAGAGTGTCCTAGAGCCAGGGGGAACTCCAGGCAGGGCAAATTGGGCCCTGGATGTTGAGAAGCTGGGTAACAAGTACTGAGAGAAC
+
AAAAA#EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAAAEEEEEEEEEEEEEEEEAEEEEEEEEEEEEEEAEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAE6
#NB501399:67:HFKTCBGX5:1:11101:1109:1044 1:N:0:CTTGTA
TAGGCNACCTGGTGGTCCCCCGCTCCCGGGAGGTCACCATATTGATGCCGAACTTAGTGCGGACACCCGATCGGCATAGCGCACTACAGCCCAGAACTCCTGGACTCAAGCGATCCTCCAGCCTCAGCCTCCCGAGTAGCTGGGACTACAG
+
And I only want to extract characters 5 to 11, which are located in the sequence part (for the first record that is TNACGG, for the second CNACCT), and write them to a new txt file. Can I do that?
You can use GNU sed with zcat:
zcat fastq.gz | sed -n '2~4{s/.\{4\}\(.\{6\}\).*/\1/;p}'
-n means lines are not printed by default
2~4 means start at line 2, then match every fourth line (each FASTQ record is four lines, and the sequence is the second line of the record)
when the address matches, the substitution captures the fifth to tenth characters in \1 and replaces the whole line with them; p prints the result
Another using zgrep and positive lookbehind:
$ zgrep -oP "(?<=^[ACTGN]{4})[ACTGN]{6}" foo.gz
TNACGG
CNACCT
Explained:
zgrep : man zgrep: search possibly compressed files for a regular expression
-o Print only the matched (non-empty) parts of a matching line
-P Interpret the pattern as a Perl-compatible regular expression (PCRE).
(?<=^[ACTGN]{4}) positive lookbehind
[ACTGN]{6} match 6 of the listed characters, preceded by the above
foo.gz my test file
$ zcat fastq.gz | awk '(NR%4)==2{print substr($0,5,6)}'
TNACGG
CNACCT
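Note that the examples in the question (TNACGG, CNACCT) are six characters long, i.e. positions 5 to 10; if you literally want positions 5 to 11, widen the extraction to seven characters, e.g. in the awk version:
zcat fastq.gz | awk '(NR%4)==2{print substr($0,5,7)}'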
I'm trying to search through HDFS for parquet files and list them out. I'm using this, which works great. It looks through all of the subdirectories in /sources/works_dbo and gives me all the parquet files:
hdfs dfs -ls -R /sources/works_dbo | grep ".*\.parquet$"
However; I just want to return the first file it encounters per subdirectory, so that each subdirectory only appears on a single line in my output. Say I had this:
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test1/file2.parquet
sources/works_dbo/test2/file3.parquet
When I run my command I expect the output to look like this:
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test2/file3.parquet
... | awk '!seen[gensub(/[^/]+$/,"",1)]++'
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test2/file3.parquet
The above uses GNU awk for gensub(), with other awks you'd use a variable and sub():
awk '{path=$0; sub(/[^/]+$/,"",path)} !seen[path]++'
It will work for any mixture of any length of paths.
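Spelled out with comments, that filter does the following (an equivalent sketch of the sub() version above, not a different method):
awk '{
  path = $0                 # e.g. sources/works_dbo/test1/file1.parquet
  sub(/[^/]+$/, "", path)   # strip the file name, leaving sources/works_dbo/test1/
  if (!seen[path]++)        # first time this directory has been seen?
    print                   # print the original line untouched
}'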
You can use sort -u (unique) with / as the delimiter, using the first three fields as the key. The -s option ("stable") makes sure that the file retained is the first one encountered for each subdirectory.
For this input
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test1/file2.parquet
sources/works_dbo/test2/file3.parquet
the result is
$ sort -s -t '/' -k 1,3 -u infile
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test2/file3.parquet
If the subdirectories are of variable length, this awk solution may come in handy:
hdfs dfs -ls -R /sources/works_dbo | awk '
BEGIN{FS="/"; OFS="/"}
{
  file=$NF                     # the file name is always the last field
  $NF=""; folder=$0            # chop off the last field to get the folder (it keeps a trailing /)
  if (!(folder in seen_dirs))  # cache only the first file seen per folder
    seen_dirs[folder]=file
}
END{
  for (f in seen_dirs)         # after all rows are processed, print the cache
    print f seen_dirs[f]
}'
Using Perl:
hdfs dfs -ls -R /sources/works_dbo | grep '.*\.parquet$' | \
perl -MFile::Basename -nle 'print unless $h{ dirname($_) }++'
In the perl command above:
-M loads File::Basename module;
-n causes Perl to apply the expression passed via -e for each input line;
-l strips the newline from each input line and adds it back when printing;
$_ is the default variable keeping the currently read line;
dirname($_) returns the directory part for the path specified by $_;
$h is a hash where keys are directory names, and values are integers 0, 1, 2 etc;
the line is printed to the standard output, unless the directory name is seen in the previous iterations, i.e. the hash value $h{ dirname($_) } is non-zero.
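As a quick illustration (feeding it the sample paths from the question), the filter keeps only the first file listed per directory:
printf '%s\n' sources/works_dbo/test1/file1.parquet sources/works_dbo/test1/file2.parquet sources/works_dbo/test2/file3.parquet | \
perl -MFile::Basename -nle 'print unless $h{ dirname($_) }++'
sources/works_dbo/test1/file1.parquet
sources/works_dbo/test2/file3.parquet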
By the way, instead of piping the result of hdfs dfs -ls -R via grep, you can use the find command:
hdfs dfs -find /sources/works_dbo -name '*.parquet'
As an easy example, consider the following command:
$ sort file.txt
This will output the file's data in sorted order. How do I put that data right back into the same file? I want to update the file with the sorted results.
This is not the solution:
$ sort file.txt > file.txt
... as it will cause the file to come out blank. Is there a way to update this file without creating a temporary file?
Sure, I could do something like this:
sort file.txt > temp.txt; mv temp.txt file.txt
But I would rather keep the results in memory until processing is done, and then write them back to the same file. sort actually has a flag that makes this possible:
sort file.txt -o file.txt
...but I'm looking for a solution that doesn't rely on the binary having a special flag for this, as not all commands are guaranteed to have one. Is there some kind of Linux command that will hold the data until the processing is finished?
For sort, you can use the -o option.
For a more general solution, you can use sponge, from the moreutils package:
sort file.txt | sponge file.txt
Note that error handling here is tricky: you may end up with an empty file if something goes wrong in the steps before sponge.
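If you want to guard against that, you can fall back to the temp-file approach the question already mentions, replacing the original only when the command succeeds (a sketch, not part of the sponge answer):
sort file.txt > file.txt.tmp && mv file.txt.tmp file.txt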
This is a duplicate of this question, which discusses the solutions above: How do I execute any command editing its file (argument) "in place" using bash?
You can do it with sed (with its r command) and process substitution:
sed -ni r<(sort file) file
In this way, you're telling sed not to print the (original) lines (-n option) and to append the file generated by <(sort file).
The well-known -i option is what makes the change happen in place.
Example
$ cat file
b
d
c
a
e
$ sed -ni r<(sort file) file
$ cat file
a
b
c
d
e
Try the vim way:
$ ex -s +'%!sort' -cxa file.txt
Here -s runs ex in silent mode, +'%!sort' filters the whole buffer through sort, and -cxa writes the file and exits.
I want to search for a pattern "xxxx" in a file and delete 5 lines before this pattern and 6 lines after this match. How can I do this using sed?
This might work for you (GNU sed):
sed ':a;N;s/\n/&/5;Ta;/xxxx/!{P;D};:b;N;s/\n/&/11;Tb;d' file
Keep a rolling window of six lines (the current line plus the five before it); on encountering the specified string, append six more lines (twelve in total) and delete the lot.
N.B. This is a bare-bones solution and will most probably need tailoring to your specific needs. Questions such as: what if the string occurs multiple times throughout the file? What if the string is within the first five lines, or multiple matches fall within five lines of each other, and so on.
Here's one way you could do it using awk. I assume that you also want to delete the line itself and that the file is small enough to fit into memory:
awk '{a[NR]=$0}/xxxx/{f=NR}END{for(i=1;i<=NR;++i)if(i<f-5||i>f+6)print a[i]}' file
Store every line into the array a. When the pattern /xxxx/ is matched, save the line number. After the whole file has been processed, loop through the array, only printing the lines you want to keep.
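The same logic spelled out with comments (an equivalent sketch of the one-liner above):
awk '
  { a[NR] = $0 }               # buffer every line, keyed by line number
  /xxxx/ { f = NR }            # remember the (last) matching line number
  END {
    for (i = 1; i <= NR; ++i)
      if (i < f-5 || i > f+6)  # keep only lines outside the deleted window
        print a[i]
  }' file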
Alternatively, you can use grep to obtain the line number first:
grep -n 'xxxx' file | awk -F: 'NR==FNR{f=$1;next} FNR<f-5||FNR>f+6' - file
In both cases, the lines deleted will be surrounding the last line where the pattern is matched.
A third option would be to use grep to obtain the line number then use sed to delete the lines:
line=$(grep -nm1 'xxxx' file | cut -d: -f1)
sed "$((line-5)),$((line+6))d" file
In this case I've also added the -m switch so grep exits after finding the first match.
If you know the line number (which is not difficult to obtain), you can use something like this:
filename="test"
start=`expr $curr_line - 5`
end=`expr $curr_line + 6`
sed "${start},${end}d" $filename (optionally sed -i)
Of course, you have to account for additional conditions: start should not be less than 1, and end should not be greater than the number of lines in the file.
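For completeness, curr_line can be obtained the same way as in the earlier answer (a sketch reusing grep -nm1 and cut from above):
curr_line=$(grep -nm1 'xxxx' "$filename" | cut -d: -f1)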
Another, maybe easier to follow, solution would be to use grep to find the keyword and the corresponding line number:
grep -n 'KEYWORD' <file>
then use sed to get the line number only like this:
grep -n 'KEYWORD' <file> | sed 's/:.*//'
Now that you have the line number simply use sed like this:
sed -i "$(LINE_START),$(LINE_END) d" <file>
to remove the lines before and/or after. With plain -i you will overwrite <file> (no backup).
A script example could be:
#!/bin/bash
KEYWORD=$1
LINES_BEFORE=$2
LINES_AFTER=$3
FILE=$4
LINE_NO=$(grep -n "$KEYWORD" "$FILE" | sed 's/:.*//')
echo "Keyword found in line: $LINE_NO"
LINE_START=$(($LINE_NO-$LINES_BEFORE))
LINE_END=$(($LINE_NO+$LINES_AFTER))
echo "Deleting lines $LINE_START to $LINE_END!"
sed -i "$LINE_START,$LINE_END d" $FILE
Please note that this will work only if the keyword is found once! Adapt the script to your needs!
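If the keyword can appear more than once and you only care about the first occurrence, one option (an assumption on my part, reusing -m1 from the earlier answer) is to stop grep at the first match:
LINE_NO=$(grep -nm1 "$KEYWORD" "$FILE" | sed 's/:.*//')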