sed command to add a line after some pattern - linux

I have a file (content below)
name = [
"victor",
"linda",
"harris",
"sandy"
]
Now how do I add a line (shell, using sed or awk) to get the output below:
name = [
"NEW INPUT HERE",
"victor",
"linda",
"harris",
"sandy"
]
I have tried multiple ways but have not been able to achieve it. Here is one of the things I tried:
sed '2a'$'\n'' "NEW INPUT HERE", ' filename
I am able to add it, but it is not giving a new line after my input.

Some sed implementations are finicky, but each of the following should work:
$ sed -e '1a\
"NEW INPUT HERE",
' input-file
$ sed -e $'1a\\\n "NEW INPUT HERE",\n' input-file # Bashism

Using sed
$ sed '2{h;s/[[:alpha:]][^"]*/NEW INPUT HERE/};2G' input_file
name = [
"NEW INPUT HERE",
"victor",
"linda",
"harris",
"sandy"
]

Works for me with GNU sed:
sed '2i\ "NEW INPUT HERE",'
or
sed '1a\ "NEW INPUT HERE",'

This will just use for the new line whatever indenting you already use for the existing 2nd line:
$ awk -v new='"NEW INPUT HERE",' 'NR==2{orig=$0; sub(/[^[:space:]].*/,""); print $0 new; $0=orig} 1' file
name = [
"NEW INPUT HERE",
"victor",
"linda",
"harris",
"sandy"
]

Another option is to match the [ and the next line.
Then capture the newline and leading spaces in group 1.
In the replacement use your text surrounded by double quotes and a backreference to group 1 to keep the indenting the same.
sed -E '/\[/{N;/(\n[[:blank:]]+)/{s//\1"NEW INPUT HERE",\1/}}' file
Output
name = [
"NEW INPUT HERE",
"victor",
"linda",
"harris",
"sandy"
]

awk -v str='"NEW INPUT HERE",' '
/name/{
print;
getline; # read the first element ("victor")
le=length($0)-length($1); # calculate the first element's indent
printf "%*s%s\n%s\n", le, " ", str, $0;
next
}1' file
name = [
"NEW INPUT HERE",
"victor",
"linda",
"harris",
"sandy"
]

This might work for you (GNU sed):
sed '2{h;s/\S.*/"NEW INPUT HERE",/p;g}' file
On line 2, make a copy of the line, substitute the required string starting where existing text starts indented, print the amended line and reinstate the original line.
Another solution using a regexp for the address line:
sed '/name = \[/{n;h;s/\S.*/"NEW INPUT HERE",/p;g}' file
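The hold-space dance above can be spelled out as a commented sketch (GNU sed; same steps, one per line):

```shell
# Sample file as in the question.
f=$(mktemp)
printf 'name = [\n"victor",\n"linda",\n"harris",\n"sandy"\n]\n' > "$f"
sed '/name = \[/{
# move to the next line (the first element)
n
# save a copy of it in the hold space
h
# replace everything from the first non-space character, and print (p) the result
s/\S.*/"NEW INPUT HERE",/p
# restore the saved original line so it also prints at end of cycle
g
}' "$f"
```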

sed command with dynamic variable and spaces - not retaining spaces

I am trying to insert a variable value into a file from a Jenkinsfile, using a shell script in an sh step.
The variable value is dynamic, and I am using sed.
sed is working fine, but it is not retaining the white space that the variable has at the beginning.
ex:
The value of repoName is " somename" (note the leading space).
stage('trying sed command') {
steps {
script {
sh """
#!/bin/bash -xel
repo='${repoName}'
echo "\$repo"
`sed -i "5i \$repo" filename`
cat ecr.tf
"""
}
}
}
current output:
names [
"xyz",
"ABC",
somename
"text"
]
Expected output:
names [
"xyz",
"ABC",
 somename
"text"
]
How do I retain the spaces in front of the variable when passing it through sed?
With
$ cat filename
names [
"xyz",
"ABC",
"text"
]
$ repo=somename
we can do:
sed -E "3s/^([[:blank:]]*).*/&\\n\\1${repo},/" filename
names [
"xyz",
"ABC",
somename,
"text"
]
That uses capturing parentheses to grab the indentation from the previous line.
If $repo might contain a value with slashes, you can tell the shell to escape them with this (eye-opening) expansion:
repo='some/name'
sed -E "3s/^([[:blank:]]*).*/&\\n\\1${repo//\//\\\/},/" filename
names [
"xyz",
"ABC",
some/name,
"text"
]
I used one sed statement to add the content to the file first, and then another sed statement just for adding the spaces. This fixed my issue. All day I was trying to fit it into one command, which did not work, probably because of the Jenkins-and-shell quoting. Using two sed commands as a workaround, I was able to finish my task.
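The actual commands weren't shown, so here is a minimal sketch of that two-command idea; the file contents, the line number 3, and the two-space indent are stand-ins for the real Jenkins inputs:

```shell
f=$(mktemp)
printf 'names [\n  "xyz",\n  "ABC",\n  "text"\n]\n' > "$f"
repo='somename'
# step 1: insert the text before line 3 (sed's i command eats leading spaces)
sed -i "3i $repo" "$f"
# step 2: re-add the indent on the freshly inserted line
sed -i '3s/^/  /' "$f"
cat "$f"
```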
The i command skips over the spaces that follow it to find the text to insert. You can put the text on a new line, with a backslash before the newline, to have the initial whitespace preserved.
stage('trying sed command') {
steps {
script {
sh """
#!/bin/bash -xel
repo='${repoName}'
echo "\$repo"
`sed -i "5i \\\\\\
\$repo" filename`
cat ecr.tf
"""
}
}
}
I've tested this from a regular shell command line, I hope it will also work in the Jenkins recipe.
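Outside Jenkins the same fix is easier to see. A sketch assuming GNU sed, which keeps the leading whitespace of text placed after the backslash-newline (the file and line number are invented for the demo):

```shell
f=$(mktemp)
printf 'names [\n  "xyz",\n  "ABC",\n  "text"\n]\n' > "$f"
repo='  somename'
# the backslash-newline form of i preserves the indent stored in $repo
sed -i "3i\\
$repo" "$f"
cat "$f"
```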

How to modify a text file so that every line has the same number of columns?

I've got a text file with several lines. Every line has words separated by a comma, and the number of words per line is not the same. With the help of awk, I would like to make every line have the same number of columns. For example, if the text file is as follows:
word1, text, help, test
number, begin
last, line, line
I would like the output to be as follows, where every line has the same number of columns, padded with an extra null word:
word1, text, help, test
number, begin, null, null
last, line, line, null
I tried the following code:
awk '{print $0,Null}' file.txt
$ awk 'BEGIN {OFS=FS=", "}
NR==FNR {max=max<NF?NF:max; next}
{for(i=NF+1;i<=max;i++) $i="null"}1' file{,}
The first scan finds the max number of columns; the missing entries are filled in on the second pass. If the first line contains all the columns (a header, perhaps), you can change it to:
$ awk 'BEGIN {OFS=FS=", "}
NR==1 {max=NF}
{for(i=NF+1;i<=max;i++) $i="null"}1' file
file{,} is expanded by bash to file file, a neat trick to avoid repeating the filename (and it eliminates possible typos).
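The expansion is easy to check, since bash performs brace expansion before the command runs:

```shell
# bash expands file{,} into two words before awk ever sees them
echo file{,}
# prints: file file
```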
Passing twice through the input file, using getline on first pass:
awk '
BEGIN {
OFS=FS=", "
while(getline < ARGV[1]) {
if (NF > max) {max = NF}
}
close(ARGV[1])
}
{ for(i=NF+1; i<=max; i++) $i="null" } 1
' file.txt
Alternatively, keeping it simple by running awk twice...
#!/bin/bash
infile="file.txt"
maxfields=$(awk 'BEGIN {FS=", "} {if (NF > max) {max = NF}} END{print max}' "$infile" )
awk -v max="$maxfields" 'BEGIN {OFS=FS=", "} {for(i=NF+1;i<=max;i++) $i="null"} 1' "$infile"
Use these Perl one-liners. The first one goes through the file and finds the max number of fields to use. The second one goes through the file and prints the input fields, padded at the end by the null strings:
export num_fields=$( perl -F'/,\s+/' -lane 'print scalar @F;' in_file | sort -nr | head -n1 )
perl -F'/,\s+/' -lane 'print join ", ", map { defined $F[$_] ? $F[$_] : "null" } 0..( $ENV{num_fields} - 1 );' in_file > out_file
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
-F'/,\s+/' : Split into @F on comma with whitespace.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches

Adding curly braces at the beginning and end of the output

I have a command that gives me the following output:
'First' : 'abc',
'Second' :'xyz',
'Third' :'lmn'
The requirement here is to convert this output into valid JSON format.
So I replaced all ' with " using sed:
<command> | sed "s/'/\"/g"
"First" : "abc",
"Second" :"xyz",
"Third" :"lmn"
Now I also need to add { at the beginning and } at the end of the output. How can I do that?
Any other thoughts are also welcome.
sed -z "s/[[:space:]]*'\([^']*\)'[[:space:]]*:[[:space:]]*'\([^']*\)'[[:space:]]*,\{0,1\}[[:space:]]*/"'"\1":"\2",/g; s/,$//; s/^/{/; s/$/}/'
First match the '<this>' : '<and this>' pairs, along with any comma that separates them.
Then convert each such sequence into "<this>":"<and this>",
Remove the trailing comma.
Wrap the result in { and }.
-z is a GNU extension to parse it all as one line. Alternatively you could remove newlines before passing to sed.
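A sketch of that alternative: flatten the newlines with tr first, so any POSIX sed can treat the input as one line (the ,\{0,1\} in the pattern swallows the comma between entries so it isn't duplicated):

```shell
# <command> is simulated by printf here.
printf "'First' : 'abc',\n'Second' :'xyz',\n'Third' :'lmn'\n" |
tr '\n' ' ' |
sed "s/[[:space:]]*'\([^']*\)'[[:space:]]*:[[:space:]]*'\([^']*\)'[[:space:]]*,\{0,1\}[[:space:]]*/\"\1\":\"\2\",/g; s/,\$//; s/^/{/; s/\$/}/"
# prints: {"First":"abc","Second":"xyz","Third":"lmn"}
```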
|sed -e '1s/^/{/' -e "s/'/\"/g" -e '$s/$/}/' does the work.

Struggling with awk

I have a curl command which returns this kind of JSON-formatted text:
[{"id": "nUsrLast//device control", "name": "nUsrLast", "access": "readonly", "value": "0", "visibility": "visible", "type": "integer"}]
I would like to get the value of the field value.
Can someone give me a simple awk or grep command to do so?
Here is an awk solution:
awk -v RS="," -F\" '/value/ {print $4}' file
0
How does it work?
Setting RS to "," breaks the input into records like this:
awk -v RS="," '{$1=$1}1' file
[{"id": "nUsrLast//device control"
"name": "nUsrLast"
"access": "readonly"
"value": "0"
"visibility": "visible"
"type": "integer"}]
Then /value/ {print $4} prints field 4, with fields separated by ".
You could use grep with the -oP options:
$ echo '[{"id": "nUsrLast//device control", "name": "nUsrLast", "access": "readonly", "value": "0", "visibility": "visible", "type": "integer"}]' | grep -oP '(?<=\"value\": \")[^"]*'
0
From grep --help,
-P, --perl-regexp PATTERN is a Perl regular expression
-o, --only-matching show only the part of a line matching PATTERN
Pattern Explanation:
(?<=\"value\": \") A lookbehind is used to place the matching marker. In our case, the regex engine places the matching marker just after the string "value": ".
[^"]* Now it matches any character except " zero or more times. When a " is detected, the regex engine stops its matching operation.
This solution isn't grep or awk but chances are pretty good your system has perl on it, and this is the best solution thus far:
echo <your_json> | perl -e '<STDIN> =~ /\"value\"\s*:\s*\"(([^"]|\\")*)\"/; print $1;'
It handles the possibility of a failed request by ensuring there is a trailing " character. It also handles backslash-escaped " symbols in the string and whitespace between "value" and the colon character.
It does not handle JSON broken across multiple lines, but then none of the other solutions do, either.
\"value\"\s*:\s*\" Ensures that we're dealing with the correct field, then
(([^"]|\\")*) Captures the associated valid JSON string
\" Makes sure the string is properly terminated
Frankly, you're better off using a real JSON parser, though.

Using Unix Tools to Extract String Values

I wrote a small Perl script to extract all the values from a JSON-formatted string for a given key name (shown below). So, if I set a command-line switch for the Perl script to id, then it would return 1, 2, and stringVal from the JSON example below. This script does the job, but I want to see how others would solve the same problem using other Unix-style tools such as awk, sed, or perl itself. Thanks
{
"id":"1",
"key2":"blah"
},
{
"id":"2",
"key9":"more blah"
},
{
"id":"stringVal",
"anotherKey":"even more blah"
}
Excerpt of perl script that extracts JSON values:
my @values;
while(<STDIN>) {
chomp;
s/\s+//g; # Remove spaces
s/"//g; # Remove quotes
push @values, /$opt_s:([\w]+),?/g; # $opt_s is a command line switch for the key to find
}
print join("\n", @values);
use JSON;
I would strongly suggest using the JSON module. It will parse your json input in one function (and back). It also offers an OOP interface.
gawk
gawk 'BEGIN{
FS=":"
printf "Enter key name: "
getline key < "-"
}
$0~key{
k=$2; getline ; v = $2
gsub("\"","",k)
gsub("\"","",v)
print k,v
}' file
output
$ ./shell.sh
Enter key name: id
1, blah
2, more blah
stringVal, even more blah
If you just want the id value,
$ key="id"
$ awk -vkey=$key -F":" '$0~key{gsub("\042|,","",$2);print $2}' file
1
2
stringVal
Here is a very rough Awk script to accomplish the task:
awk -v k=id -F: '/{|}/{next}{gsub(/^ +|,$/,"");gsub(/"/,"");if($1==k)print $2}' data
The -F: specifies ':' as the field separator.
The -v k=id sets the key you're searching for.
Lines containing '{' or '}' are skipped.
The first gsub gets rid of leading whitespace and trailing commas.
The second gsub gets rid of double quotes.
Finally, if k matches $1, $2 is printed.
data is the file containing your JSON.
sed (provided that the file is formatted as above, with no more than one entry per line):
KEY=id;cat file|sed -n "s/^[[:space:]]*\"$KEY\":\"//p"|sed 's/".*$//'
Why are you parsing the string yourself when there are libraries to do this for you? json.org has JSON parsing and encoding libraries for practically every language you can think of (and probably a few that you haven't). In Perl:
use strict;
use warnings;
use JSON qw(from_json to_json);
# enable slurp mode
local $/;
my $string = <DATA>;
my $data = from_json($string);
use Data::Dumper;
print "the data was parsed as: " . Dumper($data);
__DATA__
[
{
"id":"1",
"key2":"blah"
},
{
"id":"2",
"key9":"more blah"
},
{
"id":"stringVal",
"anotherKey":"even more blah"
}
]
This produces the output (I added a top-level array around the data so it would be parsed as one object):
the data was parsed as: $VAR1 = [
{
'key2' => 'blah',
'id' => '1'
},
{
'key9' => 'more blah',
'id' => '2'
},
{
'anotherKey' => 'even more blah',
'id' => 'stringVal'
}
];
If you don't mind seeing the quote and colon characters, I would simply use grep:
grep id file.json