shell script grep only working on last line of file - linux

I have a very strange problem I've never encountered and could not find anything related in other posts here. I have a shell script that just greps a test host file (will use the real one once this works) in a while loop, but it will only output the last line:
#!/bin/bash
while read line
do
grep $line /etc/test/hosts
done < temp-file.txt
here is the /etc/test/hosts file:
device1 192.168.0.1
device2 192.168.0.2
device3 192.168.0.3
// cut
device50 192.168.0.50
The temp-file.txt is identical to the test host file, for testing purposes.
And as mentioned, here is the output, which only shows the last line:
device3 192.168.0.50
If I change the grep command to just echo $line, it outputs correctly. I have tried changing the number of lines in the test host file to be more and fewer, but I get the same result. I have also run the grep command from the CLI and it works fine for every device I grep for. I have also tried putting the $line in double quotes, but that has not changed anything either.
I have also tried an alternate while loop using the while IFS="\n" method, but with the same results. This seems like it should be extraordinarily simple, but I'm having issues. Am I doing something wrong, or is this maybe a bug in bash?

Question answered by Eric Renouf via comment.
Solution:
Add sed -i 's/\r$//' temp-file.txt to the top of the script to remove the \r characters at the end of each line.
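For reference, a sketch of the corrected script with that fix applied (quoting "$line" and using read -r are added here as general good practice, not part of the original):
#!/bin/bash
# strip the Windows-style carriage returns that made every search pattern end in \r
sed -i 's/\r$//' temp-file.txt
while read -r line
do
    grep "$line" /etc/test/hosts
done < temp-file.txt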

Related

How to replace text strings (by bulk) after getting the results by using grep

One of my Linux MySQL servers suffered a crash, so I restored a backup; however, this time MySQL is running locally (localhost) instead of remotely (via an IP address).
Thanks to Stack Overflow users I found an excellent command to find the IP address in all .php files in a given directory! The command I am using for this is:
grep -r -l --include="*.php" "100.110.120.130" .
This outputs the necessary files with their locations, of course. If there were fewer than 10 results, I would simply change them by hand. However, I received over 200 hits/results.
So now I want to know if there is a safe command which replaces the IP address (example: 100.110.120.130) with the text "localhost" in all .php files in the given directory (/var/www/vhosts/), recursively.
And maybe, if possible and not too much work, also output the changed lines to a file? I don't know if that's even possible.
Maybe someone can provide me with a working solution? To be honest, I don't dare to fool around out of the blue with this. That's why I created a new thread.
The most standard way of replacing a string in multiple files would be to use a tool such as sed. The list of files you've obtained via grep could be read line by line (when output to a file) using a while loop in combination with sed.
$ grep -r -l --include="*.php" "100.110.120.130" . > list.txt
# this will output all matching files to list.txt
Replacing IP in matched files:
while read -r line ; do echo "$line" >> updated.txt ; sed -i 's/100\.110\.120\.130/localhost/g' "${line}" ; done < list.txt
This will take list.txt and read it line by line, feeding each filename to the sed command, which replaces all occurrences of the IP with "localhost" (the dots are escaped so they match literal dots rather than any character). The echo command directly before sed writes each filename that will be modified into updated.txt (this isn't strictly necessary, since list.txt contains the exact same filenames, but it could serve as a means of verification).
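If you also want the changed lines themselves written to a file, as the question asks, a hypothetical variant is to grep each matched file for the IP before rewriting it (changed-lines.txt is an arbitrary output name):
# record the matching lines (prefixed with their filename) before editing in place
while read -r file; do
    grep -H '100\.110\.120\.130' "$file" >> changed-lines.txt
    sed -i 's/100\.110\.120\.130/localhost/g' "$file"
done < list.txt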
To do a dry run before modifying all of the matched files, remove the -i from the sed command and it will print the output to stdout instead of modifying the files in place.
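As an aside, the intermediate list file can be skipped entirely. A minimal sketch, assuming GNU grep and GNU sed (the -i.bak keeps a backup copy of every modified file):
# -Z and -0 pass NUL-separated filenames so paths with spaces survive the pipe
grep -rlZ --include="*.php" '100\.110\.120\.130' /var/www/vhosts/ |
    xargs -0 sed -i.bak 's/100\.110\.120\.130/localhost/g'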

Receiving "curl: (3) Illegal characters found in URL" in a Linux Script

I'm currently working on a script that reads in several hundred IP addresses listed line by line in a file. The script is supposed to take the IP addresses and then output each IP address and its longitude and latitude to a new file. However, whenever I try to run the script I receive multiple "curl: (3) Illegal characters found in URL" errors. I've been troubleshooting it for a couple of days, and so far I've come up with nothing. Can anyone point me in the right direction to figure out the problem?
Thanks in advance for any help.
This is the script I'm using.
#!/bin/bash
cat ipCheck.txt | while read line
do
curl "https://api.ipstack.com/"$line"access_key=9c04ea7631a32590cac23eb27ec6c104&foraat=1&fields=ip,latitude,longitude"
done >> locations.txt
I'm currently using a test text file with 10 IP addresses. It is as follows:
101.249.211.209
102.165.32.39
102.165.35.37
102.165.49.193
103.27.125.18
103.3.61.193
103.78.132.4
104.143.83.13
104.143.83.8
104.149.216.71
Nothing jumps out as being wrong. Have you tried commenting out your curl line and running the loop with something like:
cat ipCheck.txt | while read line
do
#curl "https://api.ipstack.com/"$line"access_key=9c04ea7631a32590cac23eb27ec6c104&foraat=1&fields=ip,latitude,longitude"
echo "\"$line\""
done >> locations.txt
This should help to locate an extra space or something strange with the formatting.
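Another quick way to make invisible characters visible is cat -A (assuming GNU coreutils); carriage returns show up as ^M just before the $ that marks each line end:
cat -A ipCheck.txt | head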
If it isn't the formatting in your file, you could try:
cat ipCheck.txt | while read line
do
curl "https://api.ipstack.com/${line}access_key=9c04ea7631a32590cac23eb27ec6c104&foraat=1&fields=ip,latitude,longitude"
done >> locations.txt
This should give the same result, but you could have something strange going on with your environment.
According to a previous comment, a solution is to remove any additional characters, such as the carriage return \r, and to check the value using printf:
while IFS= read -r line; do
    # fix `\r` at end of the line
    line=$(echo "${line}" | tr -d '\r')
    printf 'Downloading %q\n' "$line"
    ./download.sh "$FOLDER" "$line"
done < "$DS"
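Adapted to the curl loop from the question, that might look like the sketch below. Note that the ? before access_key is an assumption about how the API expects its query string (the URL in the original script omits it), and the access key is the one given in the question:
#!/bin/bash
# strip carriage returns from each IP before building the URL
while IFS= read -r ip; do
    ip=$(printf '%s' "$ip" | tr -d '\r')
    curl "https://api.ipstack.com/${ip}?access_key=9c04ea7631a32590cac23eb27ec6c104&fields=ip,latitude,longitude"
done < ipCheck.txt >> locations.txt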

AWK taking action on a running output in bash

I need to use the output of a command, which is:
nmap -n -Pn -sS --vv <*IPs*>
The output of this command is divided into 2 parts: first a discovery and after that a port scan, the two of them separated by the very first line that says
Nmap scan report for <FirstIP>
What I need is the output of this first part, which by the way is the fastest one, and I need to pipe it to further commands (awk, grep, or whatever) to filter only the IP addresses. Obviously, the intention is to stop the running command exactly when the line "Nmap scan report for <*FirstIP*>" appears on the shell (first, because I don't need the other part, and second, because the other part takes too much time!)
I found a very close solution here, but it didn't work: it executes both commands (nmap and awk) but there's no output to stdout in the shell.
I would be looking for something like this:
nmap -n -Pn -sS --vv <*IPs*> | awk '/Not shown/ {system("pkill nmap")}' | awk '/^Discovered/{print $6}'
But obviously this doesn't work.
Any ideas?
Most flavors of awk do buffering. There's no option to do what you're doing in gawk, but if you use mawk, you can give it the -Winteractive option, which does not buffer.
Incidentally, you are running two awks, but you only need one:
nmap -n -Pn -sS --vv <*IPs*> | mawk -Winteractive '/Not shown/ {system("pkill nmap")} /^Discovered/{print $6}'
Every predicate in awk runs the associated block. (Although I love awk, this use case might match expect better.)
Given this command line:
left-side | right-side
The problems you have are:
The output of a command going to a pipe is block-buffered by default, so you may not see "Not shown" in the right-side command until after the left-side command has finished running, and
You want the left-side command to stop running as soon as the right-side command sees "Not shown"
For the first problem, use stdbuf or similar.
For the second - exiting the right-side command means the left-side command will get SIGPIPE the next time it writes to the now-closed pipe, so you don't need to do anything else; what you want to happen will simply happen when you exit the right-side command.
So your command line would be something like:
stdbuf -oL nmap -n -Pn -sS --vv <*IPs*> |
awk '/Not shown/{exit} /^Discovered/{print $6}'
I don't know if you meant to use "Not shown" or "Nmap scan report for" in your sample code; use whichever string you want awk to exit at.

Linux shell script - String trimming for dash

I am attempting to get the MAC address from my Raspberry Pi and take the last 6 characters of the MAC to use as the hostname alongside a fixed string.
Here is what I've managed to get working from other sources so far, but I am now totally stuck trying to trim the string down.
#!/bin/sh -e
MAC="$( sed "s/^.*macaddr=\([0-9A-F:]*\) .*$/\1/;s/://g" /proc/cmdline )"
MAC1="${MAC??????%}"
echo "$MAC1"
The shell being used by the Pi appears to be dash, so the usual bash commands that would have this done in no time don't work or generate errors when run within the script.
The full script that I am using in rc.local is below.
Any advice on a way to do this would be gratefully received.
MAC="pi""$( sed "s/^.*macaddr=\([0-9A-F:]*\) .*$/\1/;s/://g" /proc/cmdline )"
echo "$MAC" > "/etc/hostname"
CURRENT_HOSTNAME=$(cat /proc/sys/kernel/hostname)
sed -i "s/127.0.1.1.*$CURRENT_HOSTNAME/127.0.1.1\t$MAC/g" /etc/hosts
hostname $MAC
If you have the cut command on your Pi, you could do:
MAC1=$( echo $MAC | cut -c 7-12 )
Since you're already using sed to process the string, I'd suggest adding another command:
MAC=$(sed -e 's/^.*macaddr=\([0-9A-F:]*\) .*$/\1/' \
-e 's/://g' \
-e 's/.*\(.\{6\}\)/\1/' /proc/cmdline)
The extra sed command extracts the last 6 characters from each line (I assume that you only have one?). You can combine the commands into a single string if you prefer, though I find this approach to be more readable.
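Since the script runs under dash, it is also worth noting that POSIX parameter expansion alone can do the trim, with no external command. A minimal sketch, assuming the colon-stripped MAC is always 12 hex characters:
#!/bin/sh -e
MAC="$( sed "s/^.*macaddr=\([0-9A-F:]*\) .*$/\1/;s/://g" /proc/cmdline )"
# drop the first six characters, leaving the last six (POSIX, so it works in dash)
MAC1="${MAC#??????}"
echo "$MAC1"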

Bug echo-ing multiple numerical values separated by commas

I have a weird bug. Basically I have set up a function to run remote commands via SSH on a box and get the LAN MAC address and some other info. I want to write this info into a CSV file.
When I run BOXLANMAC=$(remote_command "ifconfig eth0 | grep HWaddr | cut -d' ' -f11") I can echo $BOXLANMAC and get the expected output.
However, when I run echo $BOXLANMAC,$BOXLANMAC I get ,XX:XX:XX:XX:XX:XX where I expect to see XX:XX:XX:XX:XX:XX,XX:XX:XX:XX:XX:XX. I have tried many permutations of the echo command, using quotes and escape characters for the comma, but have not had any success. I'm sure this is really simple and I should have been able to figure it out, but Google just gives me results about splitting strings on commas.
As das.cyklone and I mentioned above, you might want to check whether the remote command is returning whitespace that is affecting the output of the echo command. Perhaps using tr to remove any whitespace from the $BOXLANMAC variable will solve the problem. See How to trim whitespace from a Bash variable? for ways to do this.
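For example, a minimal sketch assuming the stray character is a trailing carriage return coming back from the remote shell (remote_command is the function from the question):
# strip a possible trailing \r from the remote output before using it
BOXLANMAC=$(remote_command "ifconfig eth0 | grep HWaddr | cut -d' ' -f11" | tr -d '\r')
echo "$BOXLANMAC,$BOXLANMAC"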
Your example seems to work fine during testing. I can offer two suggestions:
1) Use awk instead of cut when splitting on whitespace. It's more reliable and easier to test.
2) If echo isn't working, try printf. It gives you more control over the output format.
~-> BOXLANMAC=$(ifconfig eth0 | grep HWaddr | awk '{ print $5 }')
~-> echo ${BOXLANMAC}
78:2B:CB:88:E5:AR
~-> echo ${BOXLANMAC},${BOXLANMAC}
78:2B:CB:88:E5:AR,78:2B:CB:88:E5:AR
~-> printf "%s,%s\n" ${BOXLANMAC} ${BOXLANMAC}
78:2B:CB:88:E5:AR,78:2B:CB:88:E5:AR
~->
Good luck.
Have you tried: echo ${BOXLANMAC},${BOXLANMAC} ?
