How to replace string with multiple semicolons and special characters using sed in Linux [closed]

I have the string "config" in setup.xml and want to replace it with the HTML-entity-encoded string "&#115;&#101;&#114;&#118;&#101;&#114;" (the character codes for "server") using sed in Linux. I tried the command below, but it did not work:
sed -i "s#$"config"#$"&#115;&#101;&#114;&#118;&#101;&#114;"#g" setup.xml
How can I do that? If sed cannot do it, other tools are fine too.
Before: "config"
After: "&#115;&#101;&#114;&#118;&#101;&#114;"

One example, escaping only the & characters (in a sed replacement, an unescaped & stands for the matched text):
sed 's/"config"/"\&#115;\&#101;\&#114;\&#118;\&#101;\&#114;"/' setup.xml

The replacement string contains characters that can have special meaning in sed, such as ;, # and &. Escaping all of them is safe (with a quoted script and / as the delimiter, only & strictly needs it), so:
sed -n 's/"config"/"\&\#115\;\&\#101\;\&\#114\;\&\#118\;\&\#101\;\&\#114\;"/p' <<< '"config"'

Related

Extracting month from day using linux terminal [closed]

I have a text file containing a list of dates and times, like the sample below:
posted_at"
2012-06-09 11:48:31"
2012-08-09 12:40:02"
2012-04-09 13:10:00"
2012-03-09 13:40:00"
2012-10-09 14:30:01"
2012-12-09 15:30:00"
2012-11-09 16:20:00"
I want to extract the month from each line.
P.S. grep should not be used at any point in the code.
Thanks in advance!
First, select the date pattern:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file
Second, extract the month:
awk -F '-' '{print $2}'
Third, redirect to the desired file:
>> desired_file
Combining these with pipes gives the final solution:
egrep '[0-9]{4}-[0-9]{2}-[0-9]{2} ' content_file | awk -F '-' '{print $2}' >> desired_file
Voilà
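Since the question rules out grep (and egrep is just grep -E), the same result can be had with awk alone. A minimal sketch, assuming the content_file and desired_file names used above:
# Match lines containing a YYYY-MM-DD date and print only the MM part.
awk 'match($0, /[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/) {
    print substr($0, RSTART + 5, 2)   # the month starts 5 characters into the match
}' content_file >> desired_file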

Parsing a conf file in bash [closed]

Here's my config file
#comment 1
--longoption1
#comment 2
--longoption2
#comment 3
-s
#comment 4
--longoption4
I want to write a bash script that reads this .conf file, skips the comments, and assembles the command-line options like so:
./binary --longoption1 --longoption2 -s --longoption4
Working off of this post on sed, you just need to pipe the output from sed to xargs:
sed -e 's/#.*$//' -e '/^$/d' inputFile | xargs ./binary
As Wiimm points out, xargs can be finicky with a large number of arguments and may split them across multiple invocations of binary. It can be better to pass sed's output through command substitution instead:
./binary $(sed -e 's/#.*$//' -e '/^$/d' inputFile)
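Command substitution still word-splits and glob-expands the options, though. A more robust sketch that collects the options into an array, assuming the config file is named options.conf (a placeholder name):
#!/usr/bin/env bash
# Read the options file line by line, dropping comments and blank lines,
# and collect each remaining option as its own array element.
args=()
while IFS= read -r line; do
    line="${line%%#*}"                        # strip comments
    line="${line#"${line%%[![:space:]]*}"}"   # trim leading whitespace
    line="${line%"${line##*[![:space:]]}"}"   # trim trailing whitespace
    [[ -n "$line" ]] && args+=("$line")
done < options.conf
./binary "${args[@]}"                         # each option stays a single argument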

A bash loop to echo all possible ASCII characters [closed]

I know how to print all the letters and digits with
{a..z}, {A..Z} and {0..9}
but is there a way to print all possible ASCII characters via a bash loop?
You don't need a loop:
echo -e \\x{0..7}{{0..9},{A..F}}
This prints all characters from 0 to 127.
If it is okay to use awk:
awk 'BEGIN{for (i=32;i<127;i++) printf("%c", i)}'
Or using printf:
for((i=32;i<127;i++)) do printf "\x$(printf %x $i)"; done
Or use this:
for ((i=32;i<127;i++)) do printf "\\$(printf %03o "$i")"; done;printf "\n"
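For reference, a small loop that prints the decimal code next to each printable character, eight pairs per line (just an illustrative variation on the loops above):
# Print "code:char" pairs for the printable ASCII range, 8 per line.
for ((i = 32; i < 127; i++)); do
    printf '%3d:%b ' "$i" "$(printf '\\%03o' "$i")"
    (( (i - 31) % 8 == 0 )) && printf '\n'
done
printf '\n'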

How to remove 'www.' with awk in output file [closed]

How can I remove all the 'www.' prefixes with awk in my output file?
e.g.: my output file has multiple sites like
abc.com
www.def.com
blabla.org
www.zxc.net
I would like to remove all the www. in my output file:
abc.com
def.com
blabla.org
zxc.net
Probably better done in sed:
sed -i 's/^www\.//g' outputFile
In awk:
awk '{gsub(/^www\./,"",$0)}1' outputFile
This is probably what you're looking for:
$ cat file
abc.com
www.def.com
blabla.org
www.zxc.net
www.org
www.acl.lanl.gov
$ sed -E 's/^www\.(([^.]+(\.|$)){2,})/\1/' file
abc.com
def.com
blabla.org
zxc.net
www.org
acl.lanl.gov
The above requires a sed with -E for ERE support, e.g. GNU sed or macOS sed. Note the need for a more comprehensive input file to test whether a proposed solution really works.
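Since the question asked for awk, here is a sketch of the same rule in awk (strip a leading www. only when at least two labels remain), assuming the same input file:
awk '{
    rest = $0
    sub(/^www\./, "", rest)       # tentatively strip the prefix
    if (rest ~ /\./) $0 = rest    # keep the change only if a dot remains
    print
}' file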

Linux Compare two text files [closed]

I have two text files like the ones below:
File1.txt
A|234-211
B|234-244
C|234-351
D|999-876
E|456-411
F|567-211
File2.txt
234-244
999-876
567-211
And I want to compare both files and keep only the matching lines, like below:
Desired output
B|234-244
D|999-876
F|567-211
$ grep -F -f file2.txt file1.txt
B|234-244
D|999-876
F|567-211
The -F makes grep search for fixed strings (not patterns). Both -F and -f are POSIX options to grep.
Note that this assumes your file2.txt does not contain short strings like 11 which could lead to false positives.
Try:
grep -f File2.txt File1.txt
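If the partial-match caveat mentioned above is a concern, here is an awk sketch that compares the field after the | exactly against File2.txt (file names as in the question):
# Load the keys from File2.txt, then print the lines of File1.txt whose
# second |-separated field matches one of those keys exactly.
awk -F'|' 'NR == FNR { keys[$0]; next } $2 in keys' File2.txt File1.txt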
