How to cut a string after a specific character in Unix

So I have this string:
$var=server#10.200.200.20:/home/some/directory/file
I just want to extract the directory address, meaning I only want the bit after the ":" character, and get:
/home/some/directory/file
Thanks.
I need a generic command, so the cut command won't work, as the $var variable doesn't have a fixed length.

Using sed:
$ var=server#10.200.200.20:/home/some/directory/file
$ echo $var | sed 's/.*://'
/home/some/directory/file

This might work for you:
echo ${var#*:}
See Example 10-10. Pattern matching in parameter substitution

This will also do.
echo $var | cut -f2 -d":"

For completeness, using cut
cut -d : -f 2 <<< $var
And using only bash:
IFS=: read a b <<< $var ; echo $b

You don't say which shell you're using. If it's a POSIX-compatible one such as Bash, then parameter expansion can do what you want:
Parameter Expansion
...
${parameter#word}
Remove Smallest Prefix Pattern.
The word is expanded to produce a pattern. The parameter expansion then results in parameter, with the smallest portion of the prefix matched by the pattern deleted.
In other words, you can write
var="${var#*:}"
which will remove anything matching *: from $var (i.e. everything up to and including the first :). If you want to match up to the last :, then you could use ## in place of #.
This is all assuming that the part to remove does not contain : (true for IPv4 addresses, but not for IPv6 addresses).
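For example, here is a quick sketch with a made-up value containing two colons, to show the difference between # and ##:
$ v='host:port:/home/some/directory/file'
$ echo "${v#*:}"
port:/home/some/directory/file
$ echo "${v##*:}"
/home/some/directory/file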

This should do the trick:
$ echo "$var" | awk -F':' '{print $NF}'
/home/some/directory/file

awk -F: '{print $2}' <<< $var


Bash: Flip strings to the other side of the delimiter

Basically, I have a file formatted like
ABC:123
And I would like to flip the strings around the delimiter, so it would look like this
123:ABC
I would prefer to do this with bash/linux tools.
Thanks for any help!
That's reasonably easy with internal bash commands, assuming two fields, as per the following transcript:
pax:~$ x='abc:123'
pax:~$ echo "${x#*:}:${x%:*}"
123:abc
The first substitution ${x#*:} removes everything from the start up to the colon. The second, ${x%:*}, removes everything from the colon to the end.
Then you just re-join them with the colon in-between.
It doesn't matter for your particular data but % and # use the shortest possible pattern. The %% and ## variants will give you the longest possible pattern (greedy).
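A quick illustration of the difference, using a made-up value with more than one colon:
pax:~$ y='abc:123:xyz'
pax:~$ echo "${y#*:}:${y%:*}"
123:xyz:abc:123
pax:~$ echo "${y##*:}:${y%%:*}"
xyz:abc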
As an aside, this is ideal if you're doing it for one string at a time, since you don't need to kick up an external process to do the work for you. But, if you're processing an entire file, there are better ways to do it, such as with awk:
pax:~$ printf "abc:123\ndef:456\nghi:789\n" | awk -F: '{print $2 FS $1}'
123:abc
456:def
789:ghi
#!/bin/sh -x
var1=$(echo -e 'ABC:123' | cut -d':' -f1)
var2=$(echo -e 'ABC:123' | cut -d':' -f2)
echo -e "${var2}":"${var1}"
I use cut to split the string into two parts, and store both of those parts as variables.
From there, it's possible to use echo to re-arrange the variables as you see fit.
Using sed.
sed -E 's/(.*):(.*)/\2:\1/' file.txt
Using paste and cut with process substitution.
paste -d: <(cut -d : -f2 file.txt) <(cut -d : -f1 file.txt)
A slower, pure-shell solution (slowest on large sets of data/files):
while IFS=: read -r left right; do printf '%s:%s\n' "$right" "$left"; done < file.txt

Can I use a variable as parameter to AWK's {print}?

I have this bash statement for printing a specific cell from a .csv file.
set `cat $filename | awk -v FS=',' '{print $2}' | head -5 | tail -n 1`
The '{print $2}' part determines the column and the head -5 part determines the row.
Can I substitute a $counter variable in place of $2 (e.g., '{print $counter}')?
The answer is "yes" -- and there are a couple ways to do what you want. The proper way is to declare an awk variable using -v:
awk -F',' -v c=$counter 'NR==6 { print $c; exit }' "$filename"
(You will forgive me for moving some things around to do everything in awk, for passing "$filename" to awk safely, and for getting rid of set and back ticks -- that were doing nothing for the cause.)
Another way to do this is a bit of a "hackish" way -- leveraging shell quoting rules. This method requires some escaping to ensure that the first $ character (that references the intended field in awk) is not interpreted by the shell... The following works in bash (and POSIX sh):
awk -F',' "NR==6 { print \$$counter; exit }" "$filename"
Yes, and all the pipes can be removed. Variables are passed to awk with -v var=value.
Give this tested version a try, providing values for the col and row variables:
set $(awk -F "," -v col=2 -v row=5 'NR==row {print $col; exit}' "${filename}")
$(command) is preferred to `command`; the latter is deprecated.
NR is the current line number.
"${filename}" is expanded by the shell to its value: the double quotes will help if the filename contains some special chars.

How to find the last field using 'cut'

Without using sed or awk, only cut, how do I get the last field when the number of fields are unknown or change with every line?
You could try something like this:
echo 'maps.google.com' | rev | cut -d'.' -f 1 | rev
Explanation
rev reverses "maps.google.com" to be moc.elgoog.spam
cut uses dot (i.e. '.') as the delimiter, and chooses the first field, which is moc
lastly, we reverse it again to get com
Use a parameter expansion. This is much more efficient than any kind of external command, cut (or grep) included.
data=foo,bar,baz,qux
last=${data##*,}
See BashFAQ #100 for an introduction to native string manipulation in bash.
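Applied to the dotted example from the question, a quick sketch:
$ host='maps.google.com'
$ echo "${host##*.}"
com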
It is not possible using just cut. Here is a way using grep:
grep -o '[^,]*$'
Replace the comma with whatever other delimiter you need.
Explanation:
-o (--only-matching) only outputs the part of the input that matches the pattern (the default is to print the entire line if it contains a match).
[^,] is a character class that matches any character other than a comma.
* matches the preceding pattern zero or more times, so [^,]* matches zero or more non-comma characters.
$ matches the end of the string.
Putting this together, the pattern matches zero or more non-comma characters at the end of the string.
When there are multiple possible matches, grep prefers the one that starts earliest. So the entire last field will be matched.
Full example:
If we have a file called data.csv containing
one,two,three
foo,bar
then grep -o '[^,]*$' < data.csv will output
three
bar
Without awk?...
But it's so simple with awk:
echo 'maps.google.com' | awk -F. '{print $NF}'
awk is a much more powerful tool to have in your pocket.
-F is for the field separator
NF is the number of fields (and also serves as the index of the last field)
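For instance, a quick sketch with a comma-delimited string instead of dots:
$ echo 'foo,bar,baz' | awk -F, '{print $NF}'
baz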
There are multiple ways. You may use this too.
echo "Your string here"| tr ' ' '\n' | tail -n1
> here
Obviously, the blank space input for tr command should be replaced with the delimiter you need.
This is the only solution possible using nothing but cut:
echo "s.t.r.i.n.g." | cut -d'.' -f2-
[repeat_following_part_forever_or_until_out_of_memory:] | cut -d'.' -f2-
Using this solution, the number of fields can indeed be unknown and vary from time to time. However, since a line must not exceed LINE_MAX characters (including the newline character), the number of fields is bounded, so an arbitrary number of fields can never really arise as a condition for this solution.
Yes, a very silly solution, but I think it is the only one that meets the criteria.
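A concrete sketch of what that repetition looks like for a made-up four-field string:
$ echo 'a.b.c.d' | cut -d'.' -f2- | cut -d'.' -f2- | cut -d'.' -f2-
d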
If your input string doesn't contain forward slashes then you can use basename and a subshell:
$ basename "$(echo 'maps.google.com' | tr '.' '/')"
This doesn't use sed or awk, but it also doesn't use cut either, so I'm not quite sure if it qualifies as an answer to the question as it's worded.
This doesn't work well if processing input strings that can contain forward slashes. A workaround for that situation would be to replace the forward slash with some other character that you know isn't part of a valid input string. For example, the pipe (|) character rarely appears in such strings, so this would work:
$ basename "$(echo 'maps.google.com/some/url/things' | tr '/' '|' | tr '.' '/')" | tr '|' '/'
The following implements a friend's suggestion:
#!/bin/bash
rcut(){
    nu="$( echo "$1" | cut -d"$DELIM" -f 2- )"
    if [ "$nu" != "$1" ]
    then
        rcut "$nu"
    else
        echo "$nu"
    fi
}
$ export DELIM=.
$ rcut a.b.c.d
d
An alternative using perl would be:
perl -pe 's/(.*) (.*)$/$2/' file
where you may change the space in the pattern to whichever delimiter the file uses (e.g. \t for tabs)
It is better to use awk when working with tabular data. You don't have to master yet another command: if it can be achieved with awk, why not use that? I suggest you don't waste your precious time juggling a handful of commands to get the job done.
Example:
# $NF refers to the last column in awk
ll | awk '{print $NF}'
If you have a file named filelist.txt that is a list of paths such as the following:
c:/dir1/dir2/file1.h
c:/dir1/dir2/dir3/file2.h
then you can do this:
rev filelist.txt | cut -d"/" -f1 | rev
Adding an approach to this old question just for the fun of it:
$ cat input.file # file containing input that needs to be processed
a;b;c;d;e
1;2;3;4;5
no delimiter here
124;adsf;15454
foo;bar;is;null;info
$ cat tmp.sh # showing off the script to do the job
#!/bin/bash
delim=';'
while read -r line; do
    while [[ "$line" =~ "$delim" ]]; do
        line=$(cut -d"$delim" -f 2- <<<"$line")
    done
    echo "$line"
done < input.file
$ ./tmp.sh # output of above script/processed input file
e
5
no delimiter here
15454
info
Besides bash, only cut is used.
Well, and echo, I guess.
choose -1
choose (a separate utility, not part of coreutils) supports negative indexing (the syntax is similar to Python's slices).
I realized that if we just ensure a trailing delimiter exists, it works. In my case the delimiters are a comma and whitespace, so I add a space at the end:
$ ans="a, b"
$ ans+=" "; echo ${ans} | tr ',' ' ' | tr -s ' ' | cut -d' ' -f2
b

How to pass AWK output into variable?

I have a small bash script that greps/awks out a paragraph by using a keyword.
But after adding in the extra code: set var = "(......)", it only prints a blank line and not the paragraph.
So I would like to ask if anyone knows how to properly pass the awk output into a variable for outputting?
My codes:
#!/bin/sh
set var = "(awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/FileHeader/' /root/Desktop
/logs/Default.log)"
echo $var;
Thanks!
Use command substitution to capture the output of a process.
#!/bin/sh
VAR="$(awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/FileHeader/' /root/Desktop/logs/Default.log)"
echo "$VAR"
Some general advice with regard to shell scripting:
(Almost) always quote every variable reference (see the sketch after this list).
Never put spaces around the equals sign in a variable assignment.
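A minimal illustration of why the quoting matters here (the awk output spans multiple lines, and unquoted expansion collapses the newlines):
$ VAR="$(printf 'line one\n\nline two')"
$ echo $VAR
line one line two
$ echo "$VAR"
line one

line two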
You need to use "command substitution". Place the command inside either backticks, `COMMAND` or, in a pair of parentheses preceded by a dollar sign, $(COMMAND).
To set a variable you don't use set and you can't have spaces before and after the =.
Try this:
var=$(awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/FileHeader/' /root/Desktop/logs/Default.log)
echo $var
You gave me the idea of this for killing a process :). Just change chromium to whatever process you want to kill.
Try this:
VAR=$(ps -ef | grep -i chromium | awk '{print $2}'); kill -9 $VAR 2>/dev/null; unset VAR;
Anytime you see grep piped to awk, you can drop the grep. For the above,
awk '/^password/ {print $2}'
awk can easily replace other text commands like cut, tail, wc, tr, etc., and especially multiple greps piped one after another, i.e.
... | grep a | grep b | ... can become ... | awk '/a/ && /b/ {some action}'
For example, to create a variable coming from HashiCorp Vault (e.g. when using Packer template variables):
BUILD_PASSWORD=$(vault read secret/buildAccount | grep ^password | awk '{print $2}')
echo $BUILD_PASSWORD
You can do the same with grep ^user.
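Putting the grep-to-awk advice above together with that example, a minimal sketch (assuming the same vault output format):
BUILD_PASSWORD=$(vault read secret/buildAccount | awk '/^password/ {print $2}')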
