I have a list of banners in this format:
Hostname: []
IP: xxx.xxx.xxx.xxx
Port: xx
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
I have used the following grep statement in order to grab the ip:
grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}"
What do I have to add to the statement in order to grab the port as well (while still getting the IP)?
Thank you for the answers!
Why not use awk?
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
When it finds a line with IP:, it stores the IP in the variable ip.
When it finds a Port: line, it prints the stored IP and the port number.
Example
cat file
Hostname: []
IP: 163.248.1.20
Port: 843
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
163.248.1.20 843
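If you want to stay with grep instead, a possible variation (assuming the banners always carry the literal "IP:" and "Port:" labels shown above) is to match both labelled lines and strip the labels afterwards:
grep -E -o "IP: ([0-9]{1,3}\.){3}[0-9]{1,3}|Port: [0-9]+" file \
  | sed 's/^[A-Za-z]*: //' \
  | paste - -    # join each IP with the Port line that follows it
That prints one "IP port" pair per banner, much like the awk version.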
I set up an IoT hub on Azure and created a device called "ServerTemp". I generated a SAS token, and it seems to be accepted (I don't get a 401), but I'm getting a 400 Bad Request.
Here is the request I'm sending via curl:
curl -v -H"Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>" -H"Content-Type:application/json" -d'{"deviceId":"ServerTemp","temperature":70}' https://A479683.azure-devices.net/devices/ServerTemp/messages/events?api-version=2016-11-14
Request and Response (output from curl):
> POST /devices/ServerTemp/messages/events?api-version=2016-11-14 HTTP/1.1
> Host: A479683.azure-devices.net
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>
> Content-Type:application/json
> Content-Length: 42
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 400 Bad Request
< Content-Length: 151
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< iothub-errorcode: ArgumentInvalid
< Date: Sun, 15 Apr 2018 22:21:50 GMT
<
* Connection #0 to host A479683.azure-devices.net left intact
{"Message":"ErrorCode:ArgumentInvalid;BadRequest","ExceptionMessage":"Tracking ID:963189cb515345e69f94300655d3ca23-G:10-TimeStamp:04/15/2018 22:21:50"}
What am I doing wrong?
Make sure you add the expiry time &se= (as in &se=1555372034) when you form the SAS. It should be the very last parameter. That's the only way I can reproduce the HTTP 400 you're seeing (by omitting it). You should get a 204 No Content once you fix that.
The resource (&sr=) part also seems a bit off in your case: there's no device being specified. Use Device Explorer to generate a device SAS (or just to see what it should look like): Management > SAS Token.
SAS structure —
SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
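If you want to build the device token yourself rather than via Device Explorer, here is a minimal bash sketch of the documented signing scheme; DEVICE_KEY is assumed to hold the device's base64 primary key, and the hub/device names are the ones from your question:
#!/usr/bin/env bash
# Sketch only: DEVICE_KEY and the resource URI below are placeholders.
RESOURCE_URI="A479683.azure-devices.net%2Fdevices%2FServerTemp"   # URL-encoded hub/device path
EXPIRY=$(( $(date +%s) + 3600 ))                                  # token valid for one hour
KEY_HEX=$(echo "$DEVICE_KEY" | base64 -d | xxd -p -c 256)         # decode the base64 key to hex
SIG=$(printf '%s\n%s' "$RESOURCE_URI" "$EXPIRY" \
      | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$KEY_HEX" -binary \
      | base64)
SIG=$(printf '%s' "$SIG" | sed -e 's/+/%2B/g' -e 's/\//%2F/g' -e 's/=/%3D/g')  # URL-encode the signature
echo "SharedAccessSignature sr=$RESOURCE_URI&sig=$SIG&se=$EXPIRY"
The output can be pasted straight into the Authorization header; compare it against what Device Explorer generates to be sure.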
$ curl -i https://poorlyfundedskynet.azure-devices.net/devices/dexter/messages/events?api-version=2016-11-14 \
-H "Authorization: SharedAccessSignature sr=poorlyfundedskynet.azure-devices.net%2fdevices%2fdexter&sig=RxxxxxxxtE%3d&se=1555372034" \
-H "Content-Type: application/json" \
-d'{"deviceId":"dexter","temperature":70}'
HTTP/1.1 204 No Content
Content-Length: 0
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 15 Apr 2018 23:54:25 GMT
You can monitor ingress with Device Explorer or iothub-explorer. The Azure IoT Extension for Azure CLI 2.0 would probably work as well.
I have a cronjob that uses curl to send http post to my home-assistant.io server that in turn uses google_say to make my Google Home tell people to start getting ready in the morning... for a bit of fun. :)
It works great, but when trying to add some dynamic content, such as saying the day of the week, I'm struggling with the construct of using date within curl. I would also like it to determine the number of days until the weekend. I have tried the following:
"message": "Its "'"$(date +%A)"'" morning and x days until the weekend. Time to get ready."
but I get an error saying:
<html><head><title>500 Internal Server Error</title></head><body><h1>500 Internal Server Error</h1>Server got itself in trouble</body></html>
Am I wrong in thinking that "'"$(date +%A)"'" should work in this situation? Also I'd like to add how many days until the weekend, probably something like:
6 - $(date +%u)
I appreciate that I could do this very easily by doing some calculations before curl and referencing those but would like to do it in a single line if possible. The line is referenced from an .sh file at present, not a single line in cron.
This is the full line as requested:
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its "'"$(date +%A)"'" morning and 2 days until the weekend. Time to get ready."}' http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass
Thanks.
It works perfectly fine with this line:
curl --trace-ascii 1 -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '$(date +%A)' morning and 2 days until the weekend. Time to get ready."}'
With result:
== Info: Trying ::1...
== Info: TCP_NODELAY set
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 197 bytes (0xc5)
0000: POST /api/services/tts/google_say?api_password=apiPass HTTP/1.1
0041: Host: localhost
0052: User-Agent: curl/7.50.3
006b: Accept: */*
0078: x-ha-access: apiPass
008e: Content-Type: application/json
00ae: Content-Length: 130
00c3:
=> Send data, 130 bytes (0x82)
0000: {"entity_id": "media_player.Living_room_Home", "message": "Its T
0040: uesday morning and 2 days until the weekend. Time to get ready.
0080: "}
== Info: upload completely sent off: 130 out of 130 bytes
<= Recv header, 24 bytes (0x18)
0000: HTTP/1.1 404 Not Found
<= Recv header, 28 bytes (0x1c)
0000: Server: Microsoft-IIS/10.0
<= Recv header, 37 bytes (0x25)
0000: Date: Tue, 07 Nov 2017 21:12:21 GMT
<= Recv header, 19 bytes (0x13)
0000: Content-Length: 0
<= Recv header, 2 bytes (0x2)
0000:
== Info: Curl_http_done: called premature == 0
== Info: Connection #0 to host localhost left intact
Would this help?
echo $(( $(date -d 'next saturday' +%j) - $(date +%j) - 1 )) days until the weekend
The -d option in GNU date lets you provide a surprisingly flexible description of the date you want.
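Putting the two together, the line from the question could look something like this (apiPass, ipAddr and the entity_id are the placeholders from the original command, and the 6 - %u arithmetic is only a sketch: %A gives the weekday name, %u gives Monday=1 through Sunday=7, so the count goes negative on a Sunday):
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" \
  -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '"$(date +%A)"' morning and '"$(( 6 - $(date +%u) ))"' days until the weekend. Time to get ready."}' \
  http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass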
I want to use the cURL command on Linux to return just the HTTP response code, which is "200" if everything is okay.
I am using this command:
curl -I -L domain.com
but it returns the full headers, like this:
HTTP/1.1 200 OK
Date: Thu, 27 Feb 2014 19:32:45 GMT
Server: Apache/2.2.25 (Unix) mod_ssl/2.2.25 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.4.19
X-Powered-By: PHP/5.4.19
Set-Cookie: PHPSESSID=bb8aabf4a5419dbd20d56b285199f865; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Content-Type: text/html
So please, I just need the response code, not the whole text.
Regards
curl -s -o out.html -w '%{http_code}' http://www.example.com
Running the following will suppress the output so you won't have any cleanup.
curl -s -o /dev/null -w '%{http_code}' 127.0.0.1:80
The example above uses localhost and port 80.
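If you then want to branch on the code in a script, a small sketch (example.com is just a stand-in URL):
code=$(curl -s -o /dev/null -w '%{http_code}' http://www.example.com)
if [ "$code" -eq 200 ]; then
    echo "site is up"
else
    echo "got HTTP $code"
fi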
I want to sort and count how many files (of 3 types) clients downloaded from my server.
I installed tshark and ran the following command, which should capture GET requests:
./tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -R'http.request.method == "GET"'
The sniffer starts working and I get a new row every second; here is the result:
0.000000 144.137.136.253 -> 192.168.4.7 HTTP GET /pids/QE13_593706_0.bin HTTP/1.1
8.330354 1.1.1.1 -> 2.2.2.2 HTTP GET /pids/QE13_302506_0.bin HTTP/1.1
17.231572 1.1.1.2 -> 2.2.2.2 HTTP GET /pids/QE13_382506_0.bin HTTP/1.0
18.906712 1.1.1.3 -> 2.2.2.2 HTTP GET /pids/QE13_182406_0.bin HTTP/1.1
19.485199 1.1.1.4 -> 2.2.2.2 HTTP GET /pids/QE13_302006_0.bin HTTP/1.1
21.618113 1.1.1.5 -> 2.2.2.2 HTTP GET /pids/QE13_312106_0.bin HTTP/1.1
30.951197 1.1.1.6 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
31.056364 1.1.1.7 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
37.578005 1.1.1.8 -> 2.2.2.2 HTTP GET /pids/QE13_332006_0.bin HTTP/1.1
40.132006 1.1.1.9 -> 2.2.2.2 HTTP GET /pids/PE_332006.bin HTTP/1.1
40.407742 1.1.2.1 -> 2.2.2.2 HTTP GET /pids/QE13_452906_0.bin HTTP/1.1
What do I need to do to store the result type and count (like /pids/*****.bin) in another file?
I'm not strong in Linux, but I'm sure it can be done with 1-3 lines of script. Maybe with awk, but I don't know the technique for reading the sniffer's output.
Thank you,
Can't you just grep the log file of your webserver?
Anyway, to extract the lines of captured HTTP traffic related to your server files, just try:
./tshark 'tcp port 80 and \
(((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R'http.request.method == "GET"' | \
egrep "HTTP GET /pids/.*.bin"
When I try to ping or retrieve an invalid domain, I get redirected to the default domain on my local server.
For example, trying to ping www.invaliddomainnameexample.com from my server s1.mylocaldomain.com:
~: ping www.invaliddomainnameexample.com
PING www.invaliddomainnameexample.com.mylocaldomain.com (67.877.87.128) 56(84) bytes of data.
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from mylocaldomain.com (67.877.87.128): icmp_seq=2 ttl=64 time=0.039 ms
or using curl
~: curl -I www.invaliddomainnameexample.com
HTTP/1.1 301 Moved Permanently
Date: Mon, 26 Nov 2012 16:09:57 GMT
Content-Type: text/html
Content-Length: 223
Connection: keep-alive
Keep-Alive: timeout=10
Location: http://mylocaldomain.com/
My resolv.conf:
~: cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
Could it be that your /etc/resolv.conf also contains a
search mylocaldomain.com
statement and there's a "*" (wildcard) DNS A RR for your domain?
Because then the searchlist is applied, the * record matches, and voilà!
Try ping www.invaliddomainnameexample.com. with a dot appended to mark the domain name as an FQDN, which prevents the search list from being applied.
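Two quick checks for that theory (the host names are the ones from your question, and dig needs to be installed):
# with the trailing dot the search list is skipped, so this should fail if the domain really is invalid
ping -c 1 www.invaliddomainnameexample.com.

# if a wildcard A record exists, any made-up name under your domain will still resolve
dig +short somethingrandom12345.mylocaldomain.com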
It looks like the only way to fix this is to stop the HTTP server from processing unknown hosts, though I did that only for requests from local IPs.
I use Nginx, so the config would be:
# List of server and local IPs
geo $local {
    default       0;
    34.56.23.0/21 1;
    127.0.0.1/32  1;
}

# Catch-all for any other host: local IPs get a 405, everyone else is redirected
server {
    listen 80 default_server;
    server_name _;

    if ( $local = 1 ) {
        return 405;
    }

    rewrite ^(.*) http://mylocaldomain.com$1 permanent;
}
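A quick way to check the catch-all block from the box itself (the Host header is the invalid name from the question): from an address listed in the geo block you should get a 405, from anywhere else a 301 to http://mylocaldomain.com/.
curl -i -H 'Host: www.invaliddomainnameexample.com' http://127.0.0.1/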