I am trying to run some tests using wget, and I noticed that HTTPS pages take significantly longer to load in wget than HTTP pages from the same server.
This does not seem related to any network difference: wget takes about 5 extra seconds before name resolution even starts.
Can anyone help? How can I overcome this? I was looking at using wget with the -p and -H options to evaluate network performance when I noticed this.
xbian@xbian ~ $ wget -V
GNU Wget 1.13.4 built on linux-gnueabihf.
+digest +https +ipv6 +iri +large-file +nls -ntlm +opie +ssl/gnutls
Wgetrc:
/etc/wgetrc (system)
Locale: /usr/share/locale
Compile: gcc -DHAVE_CONFIG_H -DSYSTEM_WGETRC="/etc/wgetrc"
-DLOCALEDIR="/usr/share/locale" -I. -I../lib -I../lib
-D_FORTIFY_SOURCE=2 -Iyes/include -g -O2 -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Werror=format-security
-DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
Link: gcc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
-Wl,-z,relro -Lyes/lib -lgnutls -lgcrypt -lgpg-error -lz -lidn -lrt
ftp-opie.o gnutls.o ../lib/libgnu.a
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://www.gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Originally written by Hrvoje Niksic <hniksic@xemacs.org>.
Please send bug reports and questions to <bug-wget@gnu.org>.
xbian@xbian ~ $ time wget -d -v --no-check-certificate --delete-after -4 http://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
2015-02-07 01:10:57 Setting --verbose (verbose) to 1
2015-02-07 01:10:57 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:10:57 Setting --delete-after (deleteafter) to 1
2015-02-07 01:10:57 Setting --inet4-only (inet4only) to 1
2015-02-07 01:10:57 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:10:57
2015-02-07 01:10:57 URI encoding = `UTF-8'
2015-02-07 01:10:57 --2015-02-07 01:10:57-- http://www.google.pt/
2015-02-07 01:10:57 Resolving www.google.pt (www.google.pt)... 213.30.5.52, 213.30.5.24, 213.30.5.18, ...
2015-02-07 01:10:57 Caching www.google.pt => 213.30.5.52 213.30.5.24 213.30.5.18 213.30.5.25 213.30.5.59 213.30.5.31 213.30.5.45 213.30.5.46 213.30.5.39 213.30.5.53 213.30.5.32 213.30.5.38
2015-02-07 01:10:57 Connecting to www.google.pt (www.google.pt)|213.30.5.52|:80... connected.
2015-02-07 01:10:57 Created socket 3.
2015-02-07 01:10:57 Releasing 0x003b8040 (new refcount 1).
2015-02-07 01:10:57
2015-02-07 01:10:57 ---request begin---
2015-02-07 01:10:57 GET / HTTP/1.1
2015-02-07 01:10:57 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:10:57 Accept: */*
2015-02-07 01:10:57 Host: www.google.pt
2015-02-07 01:10:57 Connection: Keep-Alive
2015-02-07 01:10:57
2015-02-07 01:10:57 ---request end---
2015-02-07 01:10:58 HTTP request sent, awaiting response...
2015-02-07 01:10:58 ---response begin---
2015-02-07 01:10:58 HTTP/1.1 200 OK
2015-02-07 01:10:58 Date: Sat, 07 Feb 2015 01:10:58 GMT
2015-02-07 01:10:58 Expires: -1
2015-02-07 01:10:58 Cache-Control: private, max-age=0
2015-02-07 01:10:58 Content-Type: text/html; charset=ISO-8859-1
2015-02-07 01:10:58 Set-Cookie: PREF=ID=98608883e4031983:FF=0:TM=1423271458:LM=1423271458:S=BnwaLDxFbjCUyPnF; expires=Mon, 06-Feb-2017 01:10:58 GMT; path=/; domain=.google.pt
2015-02-07 01:10:58 Set-Cookie: NID=67=AkXpY2nJPDDcH7xKJkslxdCtflnhOZJiNwZdu4YBAIc2FnjIZIAYHzFuln5boxiOHq1WWBdbcTnLXwPqOrfxOxkLXtO2U5UAVBCU0nVcgyC61_YLZLXGR0Fmdi9M_fIp; expires=Sun, 09-Aug-2015 01:10:58 GMT; path=/; domain=.google.pt; HttpOnly
2015-02-07 01:10:58 P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
2015-02-07 01:10:58 Server: gws
2015-02-07 01:10:58 X-XSS-Protection: 1; mode=block
2015-02-07 01:10:58 X-Frame-Options: SAMEORIGIN
2015-02-07 01:10:58 Alternate-Protocol: 80:quic,p=0.02
2015-02-07 01:10:58 Accept-Ranges: none
2015-02-07 01:10:58 Vary: Accept-Encoding
2015-02-07 01:10:58 Transfer-Encoding: chunked
2015-02-07 01:10:58
2015-02-07 01:10:58 ---response end---
2015-02-07 01:10:58 200 OK
2015-02-07 01:10:58 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:10:58 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2017-02-06 01:10:58] PREF ID=98608883e4031983:FF=0:TM=1423271458:LM=1423271458:S=BnwaLDxFbjCUyPnF
2015-02-07 01:10:58 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:10:58 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2015-08-09 02:10:58] NID 67=AkXpY2nJPDDcH7xKJkslxdCtflnhOZJiNwZdu4YBAIc2FnjIZIAYHzFuln5boxiOHq1WWBdbcTnLXwPqOrfxOxkLXtO2U5UAVBCU0nVcgyC61_YLZLXGR0Fmdi9M_fIp
2015-02-07 01:10:58 Registered socket 3 for persistent reuse.
2015-02-07 01:10:58 URI content encoding = `ISO-8859-1'
2015-02-07 01:10:58 Length: unspecified [text/html]
2015-02-07 01:10:58 Saving to: `index.html'
2015-02-07 01:10:58
2015-02-07 01:10:58 0K .......... ....... 17.6M=0.001s
2015-02-07 01:10:58
2015-02-07 01:10:58 2015-02-07 01:10:58 (17.6 MB/s) - `index.html' saved [18301]
2015-02-07 01:10:58
2015-02-07 01:10:58 Removing file due to --delete-after in main():
2015-02-07 01:10:58 Removing index.html.
real 0m0.350s
user 0m0.038s
sys 0m0.027s
xbian@xbian ~ $ time wget -d -v --no-check-certificate --delete-after -4 https://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
2015-02-07 01:11:01 Setting --verbose (verbose) to 1
2015-02-07 01:11:01 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:11:01 Setting --delete-after (deleteafter) to 1
2015-02-07 01:11:01 Setting --inet4-only (inet4only) to 1
2015-02-07 01:11:01 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:11:01
2015-02-07 01:11:01 URI encoding = `UTF-8'
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
2015-02-07 01:11:06 Caching www.google.pt => 213.30.5.25 213.30.5.53 213.30.5.38 213.30.5.32 213.30.5.24 213.30.5.46 213.30.5.39 213.30.5.18 213.30.5.52 213.30.5.31 213.30.5.59 213.30.5.45
2015-02-07 01:11:06 Connecting to www.google.pt (www.google.pt)|213.30.5.25|:443... connected.
2015-02-07 01:11:06 Created socket 4.
2015-02-07 01:11:06 Releasing 0x00b53d48 (new refcount 1).
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request begin---
2015-02-07 01:11:06 GET / HTTP/1.1
2015-02-07 01:11:06 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:11:06 Accept: */*
2015-02-07 01:11:06 Host: www.google.pt
2015-02-07 01:11:06 Connection: Keep-Alive
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request end---
2015-02-07 01:11:06 HTTP request sent, awaiting response...
2015-02-07 01:11:06 ---response begin---
2015-02-07 01:11:06 HTTP/1.1 200 OK
2015-02-07 01:11:06 Date: Sat, 07 Feb 2015 01:11:06 GMT
2015-02-07 01:11:06 Expires: -1
2015-02-07 01:11:06 Cache-Control: private, max-age=0
2015-02-07 01:11:06 Content-Type: text/html; charset=ISO-8859-1
2015-02-07 01:11:06 Set-Cookie: PREF=ID=579b1dd2360c9122:FF=0:TM=1423271466:LM=1423271466:S=9zOSotidcZWjJfXX; expires=Mon, 06-Feb-2017 01:11:06 GMT; path=/; domain=.google.pt
2015-02-07 01:11:06 Set-Cookie: NID=67=Jetj6llJijt09db9ekqGS6cBo3DE0CDqfQkp9Sh8xtLyYnNGU5zHoMED0whNkToP_w6mk6-oLTSRVdYIDekUEZH02oBYQPQhHmhpQzENI08zGNg9Jxn4EkXTIVApLCAG; expires=Sun, 09-Aug-2015 01:11:06 GMT; path=/; domain=.google.pt; HttpOnly
2015-02-07 01:11:06 P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
2015-02-07 01:11:06 Server: gws
2015-02-07 01:11:06 X-XSS-Protection: 1; mode=block
2015-02-07 01:11:06 X-Frame-Options: SAMEORIGIN
2015-02-07 01:11:06 Accept-Ranges: none
2015-02-07 01:11:06 Vary: Accept-Encoding
2015-02-07 01:11:06 Transfer-Encoding: chunked
2015-02-07 01:11:06
2015-02-07 01:11:06 ---response end---
2015-02-07 01:11:06 200 OK
2015-02-07 01:11:06 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:11:06 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2017-02-06 01:11:06] PREF ID=579b1dd2360c9122:FF=0:TM=1423271466:LM=1423271466:S=9zOSotidcZWjJfXX
2015-02-07 01:11:06 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:11:06 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2015-08-09 02:11:06] NID 67=Jetj6llJijt09db9ekqGS6cBo3DE0CDqfQkp9Sh8xtLyYnNGU5zHoMED0whNkToP_w6mk6-oLTSRVdYIDekUEZH02oBYQPQhHmhpQzENI08zGNg9Jxn4EkXTIVApLCAG
2015-02-07 01:11:06 Registered socket 4 for persistent reuse.
2015-02-07 01:11:06 URI content encoding = `ISO-8859-1'
2015-02-07 01:11:06 Length: unspecified [text/html]
2015-02-07 01:11:06 Saving to: `index.html'
2015-02-07 01:11:06
2015-02-07 01:11:06 0K .......... ....... 670K=0.03s
2015-02-07 01:11:06
2015-02-07 01:11:06 2015-02-07 01:11:06 (670 KB/s) - `index.html' saved [18319]
2015-02-07 01:11:06
2015-02-07 01:11:06 Removing file due to --delete-after in main():
2015-02-07 01:11:06 Removing index.html.
real 0m5.371s
user 0m4.083s
sys 0m0.280s
In curl, the difference does not seem that big...
xbian@xbian ~ $ curl -V
curl 7.26.0 (arm-unknown-linux-gnueabihf) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
xbian@xbian ~ $ time curl -s http://www.google.pt > /dev/null
real 0m0.140s
user 0m0.056s
sys 0m0.034s
xbian@xbian ~ $ time curl -s https://www.google.pt > /dev/null
real 0m0.294s
user 0m0.060s
sys 0m0.031s
There is some overhead in setting up an SSL/TLS connection because a session key has to be established; however, this tends to be negligible, so I doubt it is the real reason, but one never knows.
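One way to see where the time actually goes is curl's --write-out timers (a quick sketch; time_namelookup, time_connect and time_appconnect are standard curl variables, the last one covering the TLS handshake):
$ curl -s -o /dev/null -w 'dns: %{time_namelookup}s tcp: %{time_connect}s tls: %{time_appconnect}s total: %{time_total}s\n' https://www.google.pt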
How can I overcome this?
This is not an issue with GNU Wget. I tried running your commands:
$ time wget -d -v --no-check-certificate --delete-after -4 http://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
$ time wget -d -v --no-check-certificate --delete-after -4 https://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
I ran them about 10 times each. The end result? The difference in time was minor, consistent with the expected cost of the SSL/TLS negotiation. This aligns well with my expectations of GNU Wget's behaviour. So why did you see such a large difference?
Let's look at your output for the https version:
2015-02-07 01:11:01 Setting --verbose (verbose) to 1
2015-02-07 01:11:01 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:11:01 Setting --delete-after (deleteafter) to 1
2015-02-07 01:11:01 Setting --inet4-only (inet4only) to 1
2015-02-07 01:11:01 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:11:01
2015-02-07 01:11:01 URI encoding = `UTF-8'
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
2015-02-07 01:11:06 Caching www.google.pt => 213.30.5.25 213.30.5.53 213.30.5.38 213.30.5.32 213.30.5.24 213.30.5.46 213.30.5.39 213.30.5.18 213.30.5.52 213.30.5.31 213.30.5.59 213.30.5.45
2015-02-07 01:11:06 Connecting to www.google.pt (www.google.pt)|213.30.5.25|:443... connected.
2015-02-07 01:11:06 Created socket 4.
2015-02-07 01:11:06 Releasing 0x00b53d48 (new refcount 1).
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request begin---
2015-02-07 01:11:06 GET / HTTP/1.1
2015-02-07 01:11:06 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:11:06 Accept: */*
2015-02-07 01:11:06 Host: www.google.pt
2015-02-07 01:11:06 Connection: Keep-Alive
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request end---
I have only considered the output up to the point where Wget sends its first request. At this point, the SSL/TLS negotiation that everyone claims caused the dramatic rise in time hasn't even begun. Yet, if you look closely, more than 5 seconds have already elapsed!
Hence, this behaviour that you noticed is definitely not caused by the overhead of using HTTPS. So what is it then? Again, look closely at the output. Between which lines did the most time elapse?
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
That means it took Wget ~5 seconds to resolve the IP address from the domain name. However, DNS resolution is not something that Wget handles itself; Wget asks the system to resolve the hostname. This can be seen in the file host.c:329:
static void
gethostbyname_with_timeout_callback (void *arg)
{
  /* Callback run under Wget's lookup-timeout wrapper: the actual
     resolution is delegated to the system's gethostbyname().  */
  struct ghbnwt_context *ctx = (struct ghbnwt_context *)arg;
  ctx->hptr = gethostbyname (ctx->host_name);
}
Hence, what really happened in your case is that your system took some extra time to resolve the hostname. This can happen for a wide variety of reasons. However, instead of running your test multiple times, you fell for a hasty generalization and simply assumed that Wget does HTTPS very slowly.
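You can check the resolver on its own, outside of Wget, by timing a lookup through the same NSS path that gethostbyname() uses (a quick check, assuming getent is available on your system; run it a few times to see whether only the first, uncached lookup is slow):
$ time getent hosts www.google.pt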
How can I overcome this?
You can't.
The difference between HTTP and HTTPS is that the latter is using SSL/TLS to secure the connection. SSL/TLS has significant overheads:
At start up, the client and server exchange certificates and other handshake messages so that (at least) the client can verify that the server is not an impostor.
The start up negotiation entails a number of client <-> server message exchanges. If the TCP/IP level connection has appreciable latency, this will manifest as a noticeable delay.
Once the connection has been established, data that goes over the connection is encrypted on send and decrypted on receive.
I don't think there is any practical alternative to HTTPS if you want to talk securely to a regular, current-generation web server. And I don't think this changes with "next generation" HTTP; i.e. HTTP/2.
The only thing you can do to speed things up (HTTP/1.1 or HTTP/2) is to reuse a "persistent connection" for multiple GETs. The SSL/TLS negotiation only occurs when the connection is established. However, persistent connections don't help in the "single shot" use-case; e.g. when you use wget or curl to fetch one file.
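For example, both tools reuse the connection when you pass several URLs in a single invocation, so the TLS handshake is paid only once (a rough sketch; the exact savings depend on your network latency):
$ time curl -s https://www.google.pt/ https://www.google.pt/ > /dev/null
$ time wget -q --delete-after https://www.google.pt/ https://www.google.pt/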
After some tests and discussion with the wget developers, I came to the conclusion that this was due to the GnuTLS library. If wget is compiled against OpenSSL instead, the behaviour is much more like curl's.
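You can tell which TLS backend a wget build uses from the feature list in its version output (the +ssl/... flag, visible in the -V output above):
$ wget -V | grep -o 'ssl/[a-z]*'
ssl/gnutls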
Your machine is probably trying an IPv6 DNS lookup and failing because it is not configured correctly. It falls back to IPv4 after a timeout and then the connection succeeds. If this is the problem, you'll need to either fix your IPv6 configuration or disable IPv6 completely.
To test this theory, use "ping6" to try to ping the host you're trying to connect to. My guess is that "ping6" will fail while "ping" will succeed.
How to Test:
greg@mycomputer:~$ ping6 www.google.pt
PING www.google.pt(ord30s26-in-x03.1e100.net) 56 data bytes
64 bytes from ord30s26-in-x03.1e100.net: icmp_seq=1 ttl=53 time=19.5 ms
64 bytes from ord30s26-in-x03.1e100.net: icmp_seq=2 ttl=53 time=18.3 ms
^C
--- www.google.pt ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 18.342/18.970/19.599/0.643 ms
greg@mycomputer:~$ ping www.google.pt
PING www.google.pt (216.58.192.227) 56(84) bytes of data.
64 bytes from ord30s26-in-f3.1e100.net (216.58.192.227): icmp_seq=1 ttl=54 time=19.0 ms
64 bytes from ord30s26-in-f3.1e100.net (216.58.192.227): icmp_seq=2 ttl=54 time=18.3 ms
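If ping6 does fail, a quick way to test the theory further is to disable IPv6 system-wide and re-run wget (a temporary change on Linux, assuming sysctl is available; it does not survive a reboot):
$ sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
$ sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1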
I see other responses from the same server getting gzipped, but I have one URL that is not getting gzipped. I can only think the problem may be the size of the content, but I see no setting in IIS 8 that pertains to a size limit.
Static, dynamic, URL and HTTP compression are all installed and enabled. I can't find any logs that contain helpful info on why this URL is not getting compressed.
For example, here is a response from IIS that is gzipped (see the Content-Encoding: gzip response header):
curl 'http://....../small_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 65382 100 65382 0 0 233k 0 --:--:-- --:--:-- --:--:-- 233k
HTTP/1.1 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Encoding: gzip
Content-Language: en
Vary: Accept, Accept-Language, Cookie,Accept-Encoding
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:50:54 GMT
Content-Length: 65382
A larger JSON response does not have Content-Encoding: gzip:
curl 'http://....../big_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4755k 0 4755k 0 0 1018k 0 --:--:-- 0:00:04 --:--:-- 1373k
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Language: en
Vary: Accept, Accept-Language, Cookie
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:51:11 GMT
I've set compression settings to be very liberal.
[Screenshots omitted: the compression settings, plus FREB (Failed Request Tracing) info for the compressed and for the non-compressed response.]
I don't know if this is still relevant, but you have to set dynamicCompressionLevel to a higher value. By default it is 0.
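For instance, something along these lines raises the gzip dynamic compression level (a sketch using appcmd; the value 4 is just an example, and the setting lives on the gzip scheme in the system.webServer/httpCompression section):
C:\> %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /[name='gzip'].dynamicCompressionLevel:"4" /commit:apphost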
I set up an IoT hub on Azure and created a device called "ServerTemp". I generated a SAS token, and it seems to be accepted (I don't get a 401), but I'm getting a 400 Bad Request.
Here is the request I'm sending via curl:
curl -v -H"Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>" -H"Content-Type:application/json" -d'{"deviceId":"ServerTemp","temperature":70}' https://A479683.azure-devices.net/devices/ServerTemp/messages/events?api-version=2016-11-14
Request and Response (output from curl):
> POST /devices/ServerTemp/messages/events?api-version=2016-11-14 HTTP/1.1
> Host: A479683.azure-devices.net
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>
> Content-Type:application/json
> Content-Length: 42
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 400 Bad Request
< Content-Length: 151
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< iothub-errorcode: ArgumentInvalid
< Date: Sun, 15 Apr 2018 22:21:50 GMT
<
* Connection #0 to host A479683.azure-devices.net left intact
{"Message":"ErrorCode:ArgumentInvalid;BadRequest","ExceptionMessage":"Tracking ID:963189cb515345e69f94300655d3ca23-G:10-TimeStamp:04/15/2018 22:21:50"}
What am I doing wrong?
Make sure you add the expiry time &se= (as in &se=1555372034) when you form the SAS. It should be the very last parameter. That's the only way I can reproduce the HTTP 400 you're seeing (by omitting it). You should get a 204 No Content once you fix that.
The resource (&sr=) part also seems a bit off in your case; there's no device being specified. Use Device Explorer to generate a device SAS (or just to see what it should look like): Management > SAS Token.
SAS structure —
SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
$ curl -i https://poorlyfundedskynet.azure-devices.net/devices/dexter/messages/events?api-version=2016-11-14 \
-H "Authorization: SharedAccessSignature sr=poorlyfundedskynet.azure-devices.net%2fdevices%2fdexter&sig=RxxxxxxxtE%3d&se=1555372034" \
-H "Content-Type: application/json" \
-d'{"deviceId":"dexter","temperature":70}'
HTTP/1.1 204 No Content
Content-Length: 0
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 15 Apr 2018 23:54:25 GMT
You can monitor ingress with Device Explorer or iothub-explorer. The Azure IoT Extension for Azure CLI 2.0 would probably work as well.
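With that CLI extension installed, monitoring would look something like this (a sketch; the hub and device names are the ones from the example above):
$ az iot hub monitor-events --hub-name poorlyfundedskynet --device-id dexter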
I have a list of banners which are in this format:
Hostname: []
IP: xxx.xxx.xxx.xxx
Port: xx
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
I have used the following grep statement in order to grab the IP:
grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}"
What do I have to add to the statement in order to grab the port as well (while still getting the IP)?
Thank you for the answers!
Why not use awk?
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
When it finds a line with IP:, it stores the IP in the variable ip.
When it finds the port, it prints the IP and the port number.
Example
cat file
Hostname: []
IP: 163.248.1.20
Port: 843
HTTP/1.0 301 Moved Permanently
Location: /login.html
Content-Type: text/html
Device-Access-Level: 255
Content-Length: 3066
Cache-Control: max-age=7200, must-revalidate
awk '/IP:/ {ip=$2} /Port:/ {print ip,$2}' file
163.248.1.20 843
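If you want to stay with grep as the question asks, one option is to match both patterns and join the pairs (a sketch assuming every IP: line is followed by a matching Port: line):
grep -E -o '(([0-9]{1,3}\.){3}[0-9]{1,3}|Port: [0-9]+)' file | sed 's/Port: //' | paste - -
163.248.1.20 843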
I'm running Plesk 9.5.2 on CentOS 5 with this Apache version:
# apachectl -v
Server version: Apache/2.2.23 (Unix)
Server built: Sep 26 2012 00:02:01
Trying to serve some mp4 files, I've set up the MIME types correctly, but I'm getting this weird behaviour:
# curl -I -s iated.org/inted/video_data/promo.mp4
HTTP/1.1 200 OK
Date: Mon, 29 Sep 2014 16:09:48 GMT
Server: Apache/2.2.23 (CentOS)
Last-Modified: Sun, 28 Sep 2014 09:44:30 GMT
ETag: "21f0070-13079ae-5041cff289b80"
Accept-Ranges: bytes
Content-Length: 19954094
X-Powered-By: PleskLin
Content-Type: video/mp4
Which is OK. That's what IE10 requests. However, Firefox and Chrome are doing something fancier and send Range: bytes=0-, like:
# curl -I -H "Range: bytes=0-" -s iated.org/inted/video_data/promo.mp4
That returns nothing. Void.
Range requests otherwise work well:
# curl -I -H "Range: bytes=1-" -s iated.org/inted/video_data/promo.mp4
HTTP/1.1 206 Partial Content
Date: Mon, 29 Sep 2014 16:08:41 GMT
Server: Apache/2.2.23 (CentOS)
Last-Modified: Sun, 28 Sep 2014 09:44:30 GMT
ETag: "21f0070-13079ae-5041cff289b80"
Accept-Ranges: bytes
Content-Length: 19954093
X-Powered-By: PleskLin
Content-Range: bytes 1-19954093/19954094
Content-Type: video/mp4
Any idea why Apache is panicking on Range: bytes=0-?
Updating Apache to 2.2.27 solved the issue.
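After the upgrade, the previously failing request should return a normal partial response (expected shape, not captured output):
# curl -I -H "Range: bytes=0-" -s iated.org/inted/video_data/promo.mp4
HTTP/1.1 206 Partial Content
Content-Range: bytes 0-19954093/19954094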
I want to use the curl command on Linux to return just the HTTP response code as the result, e.g. "200" if everything is OK.
I am using this command:
curl -I -L domain.com
but this is returning the full headers, like this:
HTTP/1.1 200 OK
Date: Thu, 27 Feb 2014 19:32:45 GMT
Server: Apache/2.2.25 (Unix) mod_ssl/2.2.25 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.4.19
X-Powered-By: PHP/5.4.19
Set-Cookie: PHPSESSID=bb8aabf4a5419dbd20d56b285199f865; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Content-Type: text/html
So please, I just need the response code, not the whole text.
Regards
curl -s -o out.html -w '%{http_code}' http://www.example.com
Running the following will suppress the output, so you won't have any cleanup:
curl -s -o /dev/null -w '%{http_code}' 127.0.0.1:80
The example above uses localhost and port 80.
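To act on the returned code in a shell script, something like this works (a small sketch; www.example.com is just a placeholder URL):
status=$(curl -s -o /dev/null -w '%{http_code}' http://www.example.com)
if [ "$status" -eq 200 ]; then
    echo "OK"
else
    echo "Got $status"
fi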