IIS not gzipping large dynamic JSON response

I see other responses from the same server getting gzipped, but one particular URL is not. The only cause I can think of is the size of the content, yet I see no setting in IIS 8 that pertains to a size limit.
All static, dynamic, URL, and HTTP compression features are installed and enabled. I can't find any logs that contain helpful information on why this URL is not getting compressed.
For example, here is a response that IIS does gzip (note the Content-Encoding: gzip response header):
curl 'http://....../small_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 65382 100 65382 0 0 233k 0 --:--:-- --:--:-- --:--:-- 233k
HTTP/1.1 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Encoding: gzip
Content-Language: en
Vary: Accept, Accept-Language, Cookie,Accept-Encoding
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:50:54 GMT
Content-Length: 65382
The larger JSON response does not have Content-Encoding: gzip:
curl 'http://....../big_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4755k 0 4755k 0 0 1018k 0 --:--:-- 0:00:04 --:--:-- 1373k
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Language: en
Vary: Accept, Accept-Language, Cookie
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:51:11 GMT
I've set the compression settings to be very liberal.
FREB (Failed Request Tracing) info for the compressed response: [screenshot]
FREB info for the non-compressed response: [screenshot]

I don't know if this is still relevant, but you have to set dynamicCompressionLevel to a higher value. By default it is 0.
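As a sketch of how to change it (assuming the default gzip scheme name and using appcmd from an elevated prompt; the value 9 is just an example of a high level):
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /[name='gzip'].dynamicCompressionLevel:"9" /commit:apphost
iisreset
Then re-run the curl test against the large URL and look for Content-Encoding: gzip.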

Related

How do I send an event to Azure IoT Hub via HTTPS

I set up an IoT Hub on Azure and created a device called "ServerTemp". I generated a SAS token, and it seems to be accepted (I don't get a 401), but I'm getting a 400 Bad Request.
Here is the request I'm sending via curl:
curl -v -H"Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>" -H"Content-Type:application/json" -d'{"deviceId":"ServerTemp","temperature":70}' https://A479683.azure-devices.net/devices/ServerTemp/messages/events?api-version=2016-11-14
Request and Response (output from curl):
> POST /devices/ServerTemp/messages/events?api-version=2016-11-14 HTTP/1.1
> Host: A479683.azure-devices.net
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>
> Content-Type:application/json
> Content-Length: 42
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 400 Bad Request
< Content-Length: 151
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< iothub-errorcode: ArgumentInvalid
< Date: Sun, 15 Apr 2018 22:21:50 GMT
<
* Connection #0 to host A479683.azure-devices.net left intact
{"Message":"ErrorCode:ArgumentInvalid;BadRequest","ExceptionMessage":"Tracking ID:963189cb515345e69f94300655d3ca23-G:10-TimeStamp:04/15/2018 22:21:50"}
What am I doing wrong?
Make sure you add the expiry time &se= (as in &se=1555372034) when you form the SAS. It should be the very last parameter. That's the only way I can reproduce the HTTP 400 you're seeing (by omitting it). You should get a 204 No Content once you fix that.
The resource (&sr=) part also seems a bit off in your case; there's no device being specified. Use Device Explorer to generate a device SAS (or just to see what it should look like): Management > SAS Token.
SAS structure —
SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
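If you want to build a device-scoped token by hand instead of using Device Explorer, here is a minimal bash sketch (my own illustration; HUB, DEVICE and DEVICE_KEY are placeholders for the hub name, device id and the device's base64 primary key):
SR="${HUB}.azure-devices.net%2fdevices%2f${DEVICE}"   # URL-encoded resource URI
SE=$(( $(date +%s) + 3600 ))                          # expiry, one hour from now
SIG=$(printf '%s\n%s' "$SR" "$SE" \
  | openssl dgst -sha256 -binary -mac HMAC \
      -macopt hexkey:"$(echo -n "$DEVICE_KEY" | base64 -d | xxd -p -c 256)" \
  | base64)
SIG=$(printf '%s' "$SIG" | sed -e 's/+/%2B/g' -e 's/\//%2F/g' -e 's/=/%3D/g')   # URL-encode the signature
AUTH="SharedAccessSignature sr=${SR}&sig=${SIG}&se=${SE}"
Pass $AUTH as the Authorization header, as in the working request below.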
$ curl -i https://poorlyfundedskynet.azure-devices.net/devices/dexter/messages/events?api-version=2016-11-14 \
-H "Authorization: SharedAccessSignature sr=poorlyfundedskynet.azure-devices.net%2fdevices%2fdexter&sig=RxxxxxxxtE%3d&se=1555372034" \
-H "Content-Type: application/json" \
-d'{"deviceId":"dexter","temperature":70}'
HTTP/1.1 204 No Content
Content-Length: 0
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 15 Apr 2018 23:54:25 GMT
You can monitor ingress with Device Explorer or iothub-explorer. The Azure IoT Extension for Azure CLI 2.0 would probably work as well.
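With the azure-iot extension installed, monitoring incoming device messages looks something like this (command and flags as I understand the extension; treat it as a sketch):
az iot hub monitor-events --hub-name A479683 --device-id ServerTemp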

Why would \u0000 (null characters) end up in an HTTP response

I'm using cURL (or Node request) and see \u0000 scattered throughout the response and don't understand why they appear.
I figured out I can remove them with response.body.replace(/\u0000/g, '') but would like to understand where they come from to see if there's maybe a better way.
I've played around with request headers but don't understand what produces these characters.
Furthermore, when I browse the site in my browser I don't see them, but when I copy the request (Chrome's "Copy as cURL" option) into a terminal, I do see them.
Is there some request header or some other way I should be removing/detecting these characters?
Example request headers using Node request:
{
  'Content-Type': '*/*; charset=utf-8',
  accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
}
Example cURL (and my ~/.curlrc is empty):
curl 'http://example.com' -H 'Pragma: no-cache' -H 'DNT: 1' \
-H 'Accept-Encoding: gzip, deflate' -H 'Accept-Language: en-US,en;q=0.8,la;q=0.6' \
-H 'Upgrade-Insecure-Requests: 1' \
-H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' \
-H 'Cache-Control: no-cache' -H 'Connection: keep-alive' --compressed
The response headers:
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 04 Nov 2017 14:50:13 GMT
Content-Type: text/plain
Last-Modified: Thu, 02 Nov 2017 19:02:13 GMT
Transfer-Encoding: chunked
Connection: close
ETag: W/"59fc6be1-41e2"
Content-Encoding: gzip
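For what it's worth, the same workaround as the replace() call above can be done at the shell level with tr (a rough sketch; example.com stands in for the real host):
curl -s --compressed 'http://example.com' | tr -cd '\000' | wc -c      # count NUL bytes in the body
curl -s --compressed 'http://example.com' | tr -d '\000' > clean.html  # strip them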

Use date calculations in Linux bash curl to get day of week

I have a cron job that uses curl to send an HTTP POST to my home-assistant.io server, which in turn uses google_say to make my Google Home tell people to start getting ready in the morning... for a bit of fun. :)
It works great, but when trying to add some dynamic content, such as saying the day of the week, I'm struggling with the construct for using date within curl. I would also like it to determine the number of days until the weekend. I have tried the following:
"message": "Its "'"$(date +%A)"'" morning and x days until the weekend. Time to get ready."
but get an error saying:
<html><head><title>500 Internal Server Error</title></head><body><h1>500 Internal Server Error</h1>Server got itself in trouble</body></html>
Am I wrong in thinking that "'"$(date +%A)"'" should work in this situation? Also I'd like to add how many days until the weekend, probably something like:
6 - $(date +%u)
I appreciate that I could do this very easily by doing some calculations before curl and referencing those but would like to do it in a single line if possible. The line is referenced from an .sh file at present, not a single line in cron.
This is the full line as requested:
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its "'"$(date +%A)"'" morning and 2 days until the weekend. Time to get ready."}' http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass
Thanks.
It works perfectly fine with this line:
curl --trace-ascii 1 -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '$(date +%A)' morning and 2 days until the weekend. Time to get ready."}' http://localhost/api/services/tts/google_say?api_password=apiPass
With result:
== Info: Trying ::1...
== Info: TCP_NODELAY set
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 197 bytes (0xc5)
0000: POST /api/services/tts/google_say?api_password=apiPass HTTP/1.1
0041: Host: localhost
0052: User-Agent: curl/7.50.3
006b: Accept: */*
0078: x-ha-access: apiPass
008e: Content-Type: application/json
00ae: Content-Length: 130
00c3:
=> Send data, 130 bytes (0x82)
0000: {"entity_id": "media_player.Living_room_Home", "message": "Its T
0040: uesday morning and 2 days until the weekend. Time to get ready.
0080: "}
== Info: upload completely sent off: 130 out of 130 bytes
<= Recv header, 24 bytes (0x18)
0000: HTTP/1.1 404 Not Found
<= Recv header, 28 bytes (0x1c)
0000: Server: Microsoft-IIS/10.0
<= Recv header, 37 bytes (0x25)
0000: Date: Tue, 07 Nov 2017 21:12:21 GMT
<= Recv header, 19 bytes (0x13)
0000: Content-Length: 0
<= Recv header, 2 bytes (0x2)
0000:
== Info: Curl_http_done: called premature == 0
== Info: Connection #0 to host localhost left intact
Would this help?
echo $(( $(date -d 'next saturday' +%j) - $(date +%j) - 1 )) days until the weekend
The -d option in GNU date lets you provide a surprisingly flexible description of the date you want.
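Putting that together with the single-line approach above gives something like this (my own sketch; note that %j is zero-padded, so the 10# prefix forces base 10 and avoids octal errors on days such as 008):
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" \
  -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '"$(date +%A)"' morning and '"$(( 10#$(date -d 'next saturday' +%j) - 10#$(date +%j) - 1 ))"' days until the weekend. Time to get ready."}' \
  'http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass'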

WGET - HTTPS vs HTTP = HTTPS Slower

I am doing some tests using wget and I noticed that HTTPS pages take significantly longer to load in wget than HTTP pages from the same server.
This does not seem related to any network difference. Before the name resolution, wget takes about 5 extra seconds.
Can anyone help? How can I overcome this? I was looking at using wget with the -p and -H options to evaluate network performance when I noticed this.
xbian#xbian ~ $ wget -V
GNU Wget 1.13.4 built on linux-gnueabihf.
+digest +https +ipv6 +iri +large-file +nls -ntlm +opie +ssl/gnutls
Wgetrc:
/etc/wgetrc (system)
Locale: /usr/share/locale
Compile: gcc -DHAVE_CONFIG_H -DSYSTEM_WGETRC="/etc/wgetrc"
-DLOCALEDIR="/usr/share/locale" -I. -I../lib -I../lib
-D_FORTIFY_SOURCE=2 -Iyes/include -g -O2 -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Werror=format-security
-DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
Link: gcc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -DNO_SSLv2 -D_FILE_OFFSET_BITS=64 -g -Wall
-Wl,-z,relro -Lyes/lib -lgnutls -lgcrypt -lgpg-error -lz -lidn -lrt
ftp-opie.o gnutls.o ../lib/libgnu.a
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://www.gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Originally written by Hrvoje Niksic <hniksic#xemacs.org>.
Please send bug reports and questions to <bug-wget#gnu.org>.
xbian#xbian ~ $ time wget -d -v --no-check-certificate --delete-after -4 http://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
2015-02-07 01:10:57 Setting --verbose (verbose) to 1
2015-02-07 01:10:57 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:10:57 Setting --delete-after (deleteafter) to 1
2015-02-07 01:10:57 Setting --inet4-only (inet4only) to 1
2015-02-07 01:10:57 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:10:57
2015-02-07 01:10:57 URI encoding = `UTF-8'
2015-02-07 01:10:57 --2015-02-07 01:10:57-- http://www.google.pt/
2015-02-07 01:10:57 Resolving www.google.pt (www.google.pt)... 213.30.5.52, 213.30.5.24, 213.30.5.18, ...
2015-02-07 01:10:57 Caching www.google.pt => 213.30.5.52 213.30.5.24 213.30.5.18 213.30.5.25 213.30.5.59 213.30.5.31 213.30.5.45 213.30.5.46 213.30.5.39 213.30.5.53 213.30.5.32 213.30.5.38
2015-02-07 01:10:57 Connecting to www.google.pt (www.google.pt)|213.30.5.52|:80... connected.
2015-02-07 01:10:57 Created socket 3.
2015-02-07 01:10:57 Releasing 0x003b8040 (new refcount 1).
2015-02-07 01:10:57
2015-02-07 01:10:57 ---request begin---
2015-02-07 01:10:57 GET / HTTP/1.1
2015-02-07 01:10:57 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:10:57 Accept: */*
2015-02-07 01:10:57 Host: www.google.pt
2015-02-07 01:10:57 Connection: Keep-Alive
2015-02-07 01:10:57
2015-02-07 01:10:57 ---request end---
2015-02-07 01:10:58 HTTP request sent, awaiting response...
2015-02-07 01:10:58 ---response begin---
2015-02-07 01:10:58 HTTP/1.1 200 OK
2015-02-07 01:10:58 Date: Sat, 07 Feb 2015 01:10:58 GMT
2015-02-07 01:10:58 Expires: -1
2015-02-07 01:10:58 Cache-Control: private, max-age=0
2015-02-07 01:10:58 Content-Type: text/html; charset=ISO-8859-1
2015-02-07 01:10:58 Set-Cookie: PREF=ID=98608883e4031983:FF=0:TM=1423271458:LM=1423271458:S=BnwaLDxFbjCUyPnF; expires=Mon, 06-Feb-2017 01:10:58 GMT; path=/; domain=.google.pt
2015-02-07 01:10:58 Set-Cookie: NID=67=AkXpY2nJPDDcH7xKJkslxdCtflnhOZJiNwZdu4YBAIc2FnjIZIAYHzFuln5boxiOHq1WWBdbcTnLXwPqOrfxOxkLXtO2U5UAVBCU0nVcgyC61_YLZLXGR0Fmdi9M_fIp; expires=Sun, 09-Aug-2015 01:10:58 GMT; path=/; domain=.google.pt; HttpOnly
2015-02-07 01:10:58 P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
2015-02-07 01:10:58 Server: gws
2015-02-07 01:10:58 X-XSS-Protection: 1; mode=block
2015-02-07 01:10:58 X-Frame-Options: SAMEORIGIN
2015-02-07 01:10:58 Alternate-Protocol: 80:quic,p=0.02
2015-02-07 01:10:58 Accept-Ranges: none
2015-02-07 01:10:58 Vary: Accept-Encoding
2015-02-07 01:10:58 Transfer-Encoding: chunked
2015-02-07 01:10:58
2015-02-07 01:10:58 ---response end---
2015-02-07 01:10:58 200 OK
2015-02-07 01:10:58 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:10:58 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2017-02-06 01:10:58] PREF ID=98608883e4031983:FF=0:TM=1423271458:LM=1423271458:S=BnwaLDxFbjCUyPnF
2015-02-07 01:10:58 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:10:58 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2015-08-09 02:10:58] NID 67=AkXpY2nJPDDcH7xKJkslxdCtflnhOZJiNwZdu4YBAIc2FnjIZIAYHzFuln5boxiOHq1WWBdbcTnLXwPqOrfxOxkLXtO2U5UAVBCU0nVcgyC61_YLZLXGR0Fmdi9M_fIp
2015-02-07 01:10:58 Registered socket 3 for persistent reuse.
2015-02-07 01:10:58 URI content encoding = `ISO-8859-1'
2015-02-07 01:10:58 Length: unspecified [text/html]
2015-02-07 01:10:58 Saving to: `index.html'
2015-02-07 01:10:58
2015-02-07 01:10:58 0K .......... ....... 17.6M=0.001s
2015-02-07 01:10:58
2015-02-07 01:10:58 2015-02-07 01:10:58 (17.6 MB/s) - `index.html' saved [18301]
2015-02-07 01:10:58
2015-02-07 01:10:58 Removing file due to --delete-after in main():
2015-02-07 01:10:58 Removing index.html.
real 0m0.350s
user 0m0.038s
sys 0m0.027s
xbian#xbian ~ $ time wget -d -v --no-check-certificate --delete-after -4 https://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
2015-02-07 01:11:01 Setting --verbose (verbose) to 1
2015-02-07 01:11:01 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:11:01 Setting --delete-after (deleteafter) to 1
2015-02-07 01:11:01 Setting --inet4-only (inet4only) to 1
2015-02-07 01:11:01 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:11:01
2015-02-07 01:11:01 URI encoding = `UTF-8'
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
2015-02-07 01:11:06 Caching www.google.pt => 213.30.5.25 213.30.5.53 213.30.5.38 213.30.5.32 213.30.5.24 213.30.5.46 213.30.5.39 213.30.5.18 213.30.5.52 213.30.5.31 213.30.5.59 213.30.5.45
2015-02-07 01:11:06 Connecting to www.google.pt (www.google.pt)|213.30.5.25|:443... connected.
2015-02-07 01:11:06 Created socket 4.
2015-02-07 01:11:06 Releasing 0x00b53d48 (new refcount 1).
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request begin---
2015-02-07 01:11:06 GET / HTTP/1.1
2015-02-07 01:11:06 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:11:06 Accept: */*
2015-02-07 01:11:06 Host: www.google.pt
2015-02-07 01:11:06 Connection: Keep-Alive
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request end---
2015-02-07 01:11:06 HTTP request sent, awaiting response...
2015-02-07 01:11:06 ---response begin---
2015-02-07 01:11:06 HTTP/1.1 200 OK
2015-02-07 01:11:06 Date: Sat, 07 Feb 2015 01:11:06 GMT
2015-02-07 01:11:06 Expires: -1
2015-02-07 01:11:06 Cache-Control: private, max-age=0
2015-02-07 01:11:06 Content-Type: text/html; charset=ISO-8859-1
2015-02-07 01:11:06 Set-Cookie: PREF=ID=579b1dd2360c9122:FF=0:TM=1423271466:LM=1423271466:S=9zOSotidcZWjJfXX; expires=Mon, 06-Feb-2017 01:11:06 GMT; path=/; domain=.google.pt
2015-02-07 01:11:06 Set-Cookie: NID=67=Jetj6llJijt09db9ekqGS6cBo3DE0CDqfQkp9Sh8xtLyYnNGU5zHoMED0whNkToP_w6mk6-oLTSRVdYIDekUEZH02oBYQPQhHmhpQzENI08zGNg9Jxn4EkXTIVApLCAG; expires=Sun, 09-Aug-2015 01:11:06 GMT; path=/; domain=.google.pt; HttpOnly
2015-02-07 01:11:06 P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
2015-02-07 01:11:06 Server: gws
2015-02-07 01:11:06 X-XSS-Protection: 1; mode=block
2015-02-07 01:11:06 X-Frame-Options: SAMEORIGIN
2015-02-07 01:11:06 Accept-Ranges: none
2015-02-07 01:11:06 Vary: Accept-Encoding
2015-02-07 01:11:06 Transfer-Encoding: chunked
2015-02-07 01:11:06
2015-02-07 01:11:06 ---response end---
2015-02-07 01:11:06 200 OK
2015-02-07 01:11:06 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:11:06 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2017-02-06 01:11:06] PREF ID=579b1dd2360c9122:FF=0:TM=1423271466:LM=1423271466:S=9zOSotidcZWjJfXX
2015-02-07 01:11:06 cdm: 1 2 3 4 5 6 7 8
2015-02-07 01:11:06 Stored cookie google.pt -1 (ANY) / <permanent> <insecure> [expiry 2015-08-09 02:11:06] NID 67=Jetj6llJijt09db9ekqGS6cBo3DE0CDqfQkp9Sh8xtLyYnNGU5zHoMED0whNkToP_w6mk6-oLTSRVdYIDekUEZH02oBYQPQhHmhpQzENI08zGNg9Jxn4EkXTIVApLCAG
2015-02-07 01:11:06 Registered socket 4 for persistent reuse.
2015-02-07 01:11:06 URI content encoding = `ISO-8859-1'
2015-02-07 01:11:06 Length: unspecified [text/html]
2015-02-07 01:11:06 Saving to: `index.html'
2015-02-07 01:11:06
2015-02-07 01:11:06 0K .......... ....... 670K=0.03s
2015-02-07 01:11:06
2015-02-07 01:11:06 2015-02-07 01:11:06 (670 KB/s) - `index.html' saved [18319]
2015-02-07 01:11:06
2015-02-07 01:11:06 Removing file due to --delete-after in main():
2015-02-07 01:11:06 Removing index.html.
real 0m5.371s
user 0m4.083s
sys 0m0.280s
In curl, the difference does not seem that big...
xbian#xbian ~ $ curl -V
curl 7.26.0 (arm-unknown-linux-gnueabihf) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
xbian#xbian ~ $ time curl -s http:///www.google.pt > /dev/null
real 0m0.140s
user 0m0.056s
sys 0m0.034s
xbian#xbian ~ $ time curl -s https:///www.google.pt > /dev/null
real 0m0.294s
user 0m0.060s
sys 0m0.031s
There is some overhead related to setting up an SSL/TLS session because a session key has to be established; however, this tends to be negligible, so I doubt it is the real reason, but one never knows.
How can I overcome this?
This is not an issue with GNU Wget. I tried running your commands:
$ time wget -d -v --no-check-certificate --delete-after -4 http://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
$ time wget -d -v --no-check-certificate --delete-after -4 https://www.google.pt 2>&1 | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
I ran them about 10 times each. The end result? The difference in time was minor, as is expected due to the SSL/TLS negotiation protocol. This aligns well with my expectations of GNU Wget's behaviour. So why did you see such a large difference?
Let's look at your output for the https version:
2015-02-07 01:11:01 Setting --verbose (verbose) to 1
2015-02-07 01:11:01 Setting --check-certificate (checkcertificate) to 0
2015-02-07 01:11:01 Setting --delete-after (deleteafter) to 1
2015-02-07 01:11:01 Setting --inet4-only (inet4only) to 1
2015-02-07 01:11:01 DEBUG output created by Wget 1.13.4 on linux-gnueabihf.
2015-02-07 01:11:01
2015-02-07 01:11:01 URI encoding = `UTF-8'
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
2015-02-07 01:11:06 Caching www.google.pt => 213.30.5.25 213.30.5.53 213.30.5.38 213.30.5.32 213.30.5.24 213.30.5.46 213.30.5.39 213.30.5.18 213.30.5.52 213.30.5.31 213.30.5.59 213.30.5.45
2015-02-07 01:11:06 Connecting to www.google.pt (www.google.pt)|213.30.5.25|:443... connected.
2015-02-07 01:11:06 Created socket 4.
2015-02-07 01:11:06 Releasing 0x00b53d48 (new refcount 1).
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request begin---
2015-02-07 01:11:06 GET / HTTP/1.1
2015-02-07 01:11:06 User-Agent: Wget/1.13.4 (linux-gnueabihf)
2015-02-07 01:11:06 Accept: */*
2015-02-07 01:11:06 Host: www.google.pt
2015-02-07 01:11:06 Connection: Keep-Alive
2015-02-07 01:11:06
2015-02-07 01:11:06 ---request end---
I have only considered the output up to the time Wget sends its first request. At this point, the SSL/TLS negotiation, which everyone claims caused the dramatic rise in time, hasn't even begun. Yet, if you look closely, the time taken is already > 5 s!
Hence, this behaviour you noticed is definitely not caused by the overhead of using HTTPS. So, what is it then? Again, look closely at the output. Between which lines did the most time elapse?
2015-02-07 01:11:01 --2015-02-07 01:11:01-- https://www.google.pt/
2015-02-07 01:11:06 Resolving www.google.pt (www.google.pt)... 213.30.5.25, 213.30.5.53, 213.30.5.38, ...
That means it took Wget ~5 seconds to resolve the IP address from the domain name. However, DNS resolution is not something that Wget handles at all; Wget asks the system to resolve the IP address. This can be seen in host.c:329:
static void
gethostbyname_with_timeout_callback (void *arg)
{
  struct ghbnwt_context *ctx = (struct ghbnwt_context *)arg;
  ctx->hptr = gethostbyname (ctx->host_name);
}
Hence, what really happened in your case was that your system took some extra time to resolve the hostname. This can happen for a wide variety of reasons. However, instead of running your test multiple times, you fell for a hasty generalization and simply assumed that Wget does HTTPS very slowly.
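As a quick check (a sketch, not something the answer itself ran), curl's write-out timers split out name lookup from connect, TLS and total time:
curl -s -o /dev/null \
  -w 'namelookup: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n' \
  https://www.google.pt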
How can I overcome this?
You can't.
The difference between HTTP and HTTPS is that the latter uses SSL/TLS to secure the connection. SSL/TLS has significant overheads:
At start-up, the client and server exchange certificates and so on, so that (at least) the client can verify that the server is not an imposter.
The start up negotiation entails a number of client <-> server message exchanges. If the TCP/IP level connection has appreciable latency, this will manifest as a noticeable delay.
Once the connection has been established, data that goes over the connection is encrypted on send and decrypted on receive.
I don't think there is any practical alternative to HTTPS if you want to talk securely to a regular, current-generation web server. I don't think this changes with "next generation" HTTP, i.e. HTTP/2.
The only thing you can do to speed things up (HTTP/1.1 or HTTP/2) is to reuse a "persistent connection" for multiple GETs. The SSL/TLS negotiation only occurs when the connection is established. However, persistent connections don't help in the "single shot" use-case; e.g. when you use wget or curl to fetch one file.
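As a rough illustration of that point (not from the answer; timings will vary), compare fetching the same URL twice in two curl invocations versus one:
time (curl -s -o /dev/null https://www.google.pt; curl -s -o /dev/null https://www.google.pt)   # two TLS handshakes
time curl -s -o /dev/null -o /dev/null https://www.google.pt https://www.google.pt              # connection reused, one handshake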
After some tests and discussion with wget developers, I came to the conclusion that this was due to the gnutls library. If wget is compiled with openssl instead, the behaviour is much more like curl.
Your machine is probably trying an IPv6 DNS lookup and failing because it is not configured correctly. It falls back to IPv4 after a timeout and then the connection succeeds. If this is the problem, you'll need to either fix your IPv6 configuration or disable IPv6 completely.
To test this theory, use "ping6" to try to ping the host you're trying to connect to. My guess is that "ping6" will fail while "ping" will succeed.
How to Test:
greg#mycomputer:~$ ping6 www.google.pt
PING www.google.pt(ord30s26-in-x03.1e100.net) 56 data bytes
64 bytes from ord30s26-in-x03.1e100.net: icmp_seq=1 ttl=53 time=19.5 ms
64 bytes from ord30s26-in-x03.1e100.net: icmp_seq=2 ttl=53 time=18.3 ms
^C
--- www.google.pt ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 18.342/18.970/19.599/0.643 ms
greg#mycomputer:~$ ping www.google.pt
PING www.google.pt (216.58.192.227) 56(84) bytes of data.
64 bytes from ord30s26-in-f3.1e100.net (216.58.192.227): icmp_seq=1 ttl=54 time=19.0 ms
64 bytes from ord30s26-in-f3.1e100.net (216.58.192.227): icmp_seq=2 ttl=54 time=18.3 ms

cURL command in Linux to return the HTTP response code

I want to use the cURL command on Linux to return just the HTTP response code, which is "200" if everything is OK.
I am using this command:
curl -I -L domain.com
but it returns the full headers, like this:
HTTP/1.1 200 OK
Date: Thu, 27 Feb 2014 19:32:45 GMT
Server: Apache/2.2.25 (Unix) mod_ssl/2.2.25 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.4.19
X-Powered-By: PHP/5.4.19
Set-Cookie: PHPSESSID=bb8aabf4a5419dbd20d56b285199f865; path=/
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Vary: Accept-Encoding,User-Agent
Content-Type: text/html
So please, I just need the response code and not the whole text.
Regards
curl -s -o out.html -w '%{http_code}' http://www.example.com
Running the following will suppress the output so you won't have any cleanup to do.
curl -s -o /dev/null -w '%{http_code}' 127.0.0.1:80
The example above uses localhost and port 80.
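If you also need to follow redirects, as the -L in your original command does, keep -L so the reported code comes from the final response (a small variation on the above):
curl -s -o /dev/null -L -w '%{http_code}' http://domain.com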
