How to create a script that adds sed commands to a file (bash script) - linux

I have a .csv file that contains 2 columns delimited with ,.
file.csv
word1,word2
word3,word4
word5,word6
.
.
.
.
word1000,1001
I want to create a new file from file.csv containing sed commands like this:
mynewfile
sed -e 's,word1,word2,gI' \
-e 's,word3,word4,gI' \
-e 's,word5,word6,gI' \
....
How can I write a script that generates these sed commands?

You can use sed to process each line:
echo -n 'sed ' ; sed -e "s/^\(.*\)/-e 's,\1,gI'\ \\\/" file.csv
will produce the requested output:
sed -e 's,word1,word2,gI' \
-e 's,word3,word4,gI' \
-e 's,word5,word6,gI' \
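Note that the generated text still ends with a continuation backslash and names no input file; to turn it into a runnable mynewfile you can append a target (target.txt here is a hypothetical file to run the replacements on):
{ echo -n 'sed ' ; sed -e "s/^\(.*\)/-e 's,\1,gI'\ \\\/" file.csv ; echo 'target.txt' ; } > mynewfile
bash mynewfile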

Your goal seems to be performing custom replacements taken from a file. In that case, I would not generate a file containing a bash script to do the job; I would generate a sed script to do the job:
sed -e 's/^/s,/' -e 's/$/,gI/' file.csv > sed_script
sed -f sed_script <<< "word1"
We can even avoid using the sed_script file with bash's process substitution:
sed -f <(sed -e 's/^/s,/' -e 's/$/,gI/' file.csv) <<< "word1"
Update:
Simplifying the sed script generation, it becomes:
sed -e 's/.*/s,&,gI/' file.csv > sed_script
sed -f sed_script <<< "word1"
and
sed -f <(sed -e 's/.*/s,&,gI/' file.csv) <<< "word1"
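For illustration, an end-to-end run (a minimal sketch; the I flag for case-insensitive matching assumes GNU sed):
$ printf 'word1,word2\nword3,word4\n' > file.csv
$ sed -f <(sed -e 's/.*/s,&,gI/' file.csv) <<< "some text with word1 and WORD3"
some text with word2 and word4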

Related

sed isn't working when it's piped from another sed command

I'm trying to prepare my output for a grep expression, but I'm having trouble getting the data into the format I want.
I'm using the following command to get a list of IP addresses that I need.
PRIV_IP=$(aws ec2 describe-instances \
  --region "${REGION}" \
  --output text \
  --query 'Reservations[].Instances[].[PrivateIpAddress]' \
  --filters Name=tag:TagA,Values="${TagAData}" \
    Name=tag:TagB,Values="HOME" \
    Name=tag:TagC,Values="MAIN" | sed 's/\./-/g' | sed 's/ /\\|/g')
This is the output of the command; it ignores the last sed statement.
echo $PRIV_IP
1-2-3-4 5-6-7-8 9-10-11-12
If I perform the sed manually it works as intended.
echo $PRIV_IP | sed 's/ /\\|/g'
1-2-3-4\|5-6-7-8\|9-10-11-12
Can someone provide some input on what I'm doing incorrectly?
It could be that your real command prints TABs, but in your test they had already been converted to spaces, e.g.
$ echo -e "A\tB"
A B
$ echo -e "A\tB" | sed -e 's/ /X/g'
A B
$ a=$(echo -e "A\tB"); echo $a
A B
$ echo $a | sed -e 's/ /X/g'
AXB
(The TAB disappears here because the unquoted $a is word-split by the shell and echo rejoins the words with single spaces.)
Solution: replace all white space as suggested by the comments, i.e.
$ echo -e "A\tB" | sed -e 's/[[:space:]]/X/g'
AXB
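Quoting the expansion also works, since it preserves the original TAB for sed to match (a sketch, assuming GNU sed's \t escape); likewise, echo "$PRIV_IP" in the original command would have revealed that the TABs were still there:
$ a=$(echo -e "A\tB"); echo "$a" | sed -e 's/\t/X/g'
AXB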

How to sed-replace a whole line with a string stored in a variable?

#!/bin/bash
ssh $1 ssh-keyscan -t rsa $1 > /tmp/$1
RSA=$(cat /tmp/$1)
echo $RSA
sed -i 's:'^"$1".*':'"$RSA"':' /etc/ssh/ssh_known_hosts
cat /etc/ssh/ssh_known_hosts | grep $1
It stores the key in RSA but does not replace the line; I am not sure what is wrong with the sed part.
You can use the following command.
#!/bin/bash
ssh $1 ssh-keyscan -t rsa $1 > /tmp/$1
RSA=$(cat /tmp/$1)
echo $RSA
sed -i "/^$1/d" /etc/ssh/ssh_known_hosts   # drop any existing entry for the host
echo "$RSA" >> /etc/ssh/ssh_known_hosts    # append the fresh key
cat /etc/ssh/ssh_known_hosts | grep $1
I modified it per your requirement: the entry is replaced if it exists, and added if it does not.
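As an aside, OpenSSH ships tooling for exactly this bookkeeping; here is a sketch of the same flow using ssh-keygen -R instead of sed (assuming the script may modify /etc/ssh/ssh_known_hosts):
#!/bin/bash
# Remove any existing entry for the host, then append a freshly scanned key.
ssh-keygen -R "$1" -f /etc/ssh/ssh_known_hosts
ssh-keyscan -t rsa "$1" >> /etc/ssh/ssh_known_hosts
grep "$1" /etc/ssh/ssh_known_hosts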

Search for string within html link on webpage and download the linked file

I am trying to write a linux script to search for a link on a web page and download the file from that link...
the webpage is:
http://ocram.github.io/picons/downloads.html
The link I am interested in is:
"hd.reflection-black.7z"
The original way I was doing this was using these commands..
lynx -dump -listonly http://ocram.github.io/picons/downloads.html &> output1.txt
cat output1.txt | grep "17" &> output2.txt
cut -b 1-6 --complement output2.txt &> output3.txt
wget -i output3.txt
I am hoping there is an easier way to search the webpage for the link "hd.reflection-black.7z" and save the linked file.
The files are stored on Google Drive, which does not include the filename in the URL; hence the use of "17" in the second line of code above.
#linuxnoob, if you want to download the file (curl is more powerful than wget):
curl -L --compressed `(curl --compressed "http://ocram.github.io/picons/downloads.html" 2> /dev/null | \
grep -o '<a .*href=.*>' | \
sed -e 's/<a /\n<a /g' | \
grep hd.reflection-black.7z | \
sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d')` > hd.reflection-black.7z
without indentation, for your script:
curl -L --compressed `(curl --compressed "http://ocram.github.io/picons/downloads.html" 2> /dev/null | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' | grep hd.reflection-black.7z | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d')` > hd.reflection-black.7z 2>/dev/null
You can try it!
What about?
curl --compressed "http://ocram.github.io/picons/downloads.html" | \
grep -o '<a .*href=.*>' | \
sed -e 's/<a /\n<a /g' | \
grep hd.reflection-black.7z | \
sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d'
I'd try to avoid using regular expressions, since they tend to break in unexpected ways (e.g. the output is split over more than one line for some reason).
I suggest using a scripting language like Ruby or Python, where higher-level tools are available.
The following example is in Ruby:
#!/usr/bin/ruby
require 'rubygems'
require 'nokogiri'
require 'open-uri'

main_url = ARGV[0] # 'http://ocram.github.io/picons/downloads.html'
filename = ARGV[1] # 'hd.reflection-black.7z'

doc = Nokogiri::HTML(open(main_url))
url = doc.xpath("//a[text()='#{filename}']").first['href']

File.open(filename, 'w+') do |file|
  open(url, 'r') do |link|
    IO.copy_stream(link, file)
  end
end
Save it to a file like fetcher.rb and then you can use it with
ruby fetcher.rb http://ocram.github.io/picons/downloads.html hd.reflection-black.7z
To make it work you'll have to install Ruby and the Nokogiri library (both are available in most distros' repositories).
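If you'd rather stay in shell but still use a real HTML parser, xmllint from libxml2 can evaluate the same XPath; a sketch under that assumption (the 2>/dev/null hides HTML parser warnings):
url=$(curl -s "http://ocram.github.io/picons/downloads.html" \
  | xmllint --html --xpath "string(//a[text()='hd.reflection-black.7z']/@href)" - 2>/dev/null)
curl -L -o hd.reflection-black.7z "$url"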

dynamically run linux shell commands

I have a command that should be executed by a shell script.
Actually the command itself does not matter; the important parts are the subsequent command execution and the correct escaping of the critical characters.
The command that is usually executed normally in PuTTY is something like this (maybe with some additional flags for ls):
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
but now I have a batch of such commands, so I would like to execute them in a loop like:
for i in {0..100}
do
  str=str$i
  ${!str}
done
where str is:
str0="rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`"
str1="rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`"
and that gives me a lot of headache, because the execution done by ${!str} breaks the quoting and the inline command substitution between the `...` marks
my_rm() { rm -r `ls /test/$1 | awk ... | grep ...`; }

for i in `whatevr`; do
  my_rm $i
done
Getting this right is surprisingly tricky, but it can be done:
for i in $(seq 0 100)
do
  str=str$i
  eval "eval \"\$$str\""
done
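To see why the double eval is needed: the first expansion rewrites $$str into $str0, and the inner eval then executes the stored command, backticks included. A minimal demo with a hypothetical str0:
str0='echo `date`'
str=str0
eval "eval \"\$$str\""   # outer eval sees: eval "$str0"; inner eval runs the backticks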
You can also do:
for i in {0..10}
do
  <whatevercommand>
done
It's actually simpler to place them in arrays and use glob patterns:
#!/bin/bash
shopt -s nullglob
DIRS=("/test/parse_first/" "/test/parse_second/")
for D in "${DIRS[@]}"; do
  for T in "$D"/*trash*; do
    rm -r -- "$T"
  done
done
And since rm accepts multiple arguments, you don't need the extra loop:
for D in "${DIRS[@]}"; do
  rm -r -- "$D"/*trash*
done
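If the directory list keeps growing, the same deletion can also be written with find instead of arrays (a sketch, assuming the *trash* entries sit directly inside each directory):
find /test/parse_first /test/parse_second -maxdepth 1 -name '*trash*' -exec rm -r -- {} +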
UPDATE:
#!/bin/bash
readarray -t COMMANDS <<'EOF'
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`
EOF

for C in "${COMMANDS[@]}"; do
  eval "$C"
done
Or you could just read commands from another file:
readarray -t COMMANDS < somefile.txt
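For example, with a hypothetical somefile.txt holding one command per line:
printf '%s\n' 'echo first' 'echo second' > somefile.txt
readarray -t COMMANDS < somefile.txt
for C in "${COMMANDS[@]}"; do
  eval "$C"
done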

Find and highlight text in linux command line

I am looking for a Linux command that searches for a string in a text file
and highlights (colors) every occurrence of it, WITHOUT omitting text lines (like grep does).
I wrote this handy little script. It could probably be expanded to handle arguments better:
#!/bin/bash
if [ "$1" == "" ]; then
  echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
  grep -E --color "$1|$" /dev/stdin
else
  grep -E --color "$1|$" "$2"
fi
The trick is the "$1|$" pattern: the empty alternative after the | matches the end of every line, so every line is printed and only real matches get colored. It's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
command | grep -iz -e "keyword1" -e "keyword2"
(drop the extra -e switches if searching for a single word; -i ignores case; -z reads NUL-separated input, so a file without NUL bytes is matched as one big record)
Alternatively, while reading files:
grep -iz -e "keyword1" -e "keyword2" 'filename'
OR
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2"
(-A and -B set how many lines of context are printed after/before each match, so with large values every line is shown)
Alternatively, while reading files:
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
The ack command with the --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script such as this (a simple solution that takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second):
#!/usr/bin/env bash
ifs_store=$IFS
IFS=$'\n'
for line in $(cat "$1"); do
  if [ $(echo "$line" | grep -c "$2") -eq 0 ]; then
    echo "$line"
  else
    echo "$line" | grep --color=always "$2"
  fi
done
IFS=$ifs_store
Save it as, for instance, colorcat.sh, make it executable, and call it as:
colorcat.sh filename searchstring
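The same line-preserving highlight can also be done with a single sed substitution and ANSI color codes (a minimal sketch, assuming the search string contains no sed metacharacters or slashes):
# Wrap every match in red/reset escape sequences; all lines pass through.
hl() { sed -e "s/$2/$(printf '\033[31m')&$(printf '\033[0m')/g" "$1"; }
hl filename searchstring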
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, output differences with colors and line numbers.
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt
