Reverse proxy partially works on Docker Swarm - Linux

I set up a Docker Swarm with 3 nodes:
s1 : manager + worker
s2 : worker
s3 : worker
I deployed nginx as a reverse proxy to a Docker Swarm service, one instance on each node, publishing its ports with mode=host to get the real client IP. Nginx works "fine": I'm able to serve static content, use it over HTTPS, etc.
The part that doesn't work is the reverse proxying:
If nginx and the service are on the same node, everything works.
If nginx and the service are not on the same node, I can only GET /; other requests (like /css/style.css) fail with 499 (from nginx's point of view).
The nginx network is a swarm-scoped overlay network, and IP forwarding is enabled.
Here is my nginx configuration:
server {
    listen 80;
    server_name service.foo.bar;

    location / {
        proxy_pass http://service:80;
    }
}

server {
    listen 443 ssl;
    server_name service.foo.bar;

    ssl_certificate /ssl/service.foo.bar/fullchain.pem;
    ssl_certificate_key /ssl/service.foo.bar/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://service:80;
    }
}
Here is how I deployed nginx:
docker service create --name nginx \
  --mount type=bind,source=/etc/nginx/nginx.conf,target=/etc/nginx/nginx.conf \
  --mode=global \
  --publish mode=host,published=80,target=80 \
  --publish mode=host,published=443,target=443 \
  --network nginx \
  nginx
If I curl the node that hosts the service:
* TCP_NODELAY set
* Connected to service.foo.bar port 80 (#0)
> GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1
> Host: service.foo.bar
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.6
< Date: Mon, 13 Jun 2022 15:38:36 GMT
< Content-Type: application/javascript
< Content-Length: 257750
< Connection: keep-alive
< cache-control: public, immutable, max-age=604800
< expires: Mon, 20 Jun 2022 15:38:36 GMT
< permissions-policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), sync-xhr=(self "https://haveibeenpwned.com" "https://2fa.directory"), usb=(), vr=()
< x-content-type-options: nosniff
< x-frame-options: SAMEORIGIN
< referrer-policy: same-origin
< x-xss-protection: 0
<
/*! For license information please see polyfills.d92dcdb0a986e964fec8.js.LICENSE.txt */
[...]
If I curl a node that doesn't host the service:
* TCP_NODELAY set
* Connected to service.foo.bar port 80 (#0)
> GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1
> Host: service.foo.bar
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.6
< Date: Mon, 13 Jun 2022 15:38:25 GMT
< Content-Type: application/javascript
< Content-Length: 257750
< Connection: keep-alive
< cache-control: public, immutable, max-age=604800
< expires: Mon, 20 Jun 2022 15:38:25 GMT
< permissions-policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), sync-xhr=(self "https://haveibeenpwned.com" "https://2fa.directory"), usb=(), vr=()
< x-content-type-options: nosniff
< x-frame-options: SAMEORIGIN
< referrer-policy: same-origin
< x-xss-protection: 0
<
* transfer closed with 257750 bytes remaining to read
* Closing connection 0
curl: (18) transfer closed with 257750 bytes remaining to read
The nginx log says:
nginx.0.scembp2e9iqp#s3 | 2022/06/13 15:38:36 [warn] 23#23: *114 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/5/00/0000000005 while reading upstream, client: #ip, server: service.foo.bar, request: "GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1", upstream: "http://10.0.4.56:80/app/polyfills.d92dcdb0a986e964fec8.js", host: "service.foo.bar"
My nodes are connected to each other over WireGuard. This is my routing table:
default via #ip dev ens3
#ip dev ens3 scope link
10.252.1.0/24 dev wg0 proto kernel scope link src 10.252.1.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.19.0.0/16 dev docker_gwbridge proto kernel scope link src 172.19.0.1
Here is my WireGuard configuration:
[Interface]
Address = 10.252.1.1/24
ListenPort = 51820
PrivateKey = ***
[Peer]
PublicKey = ***
AllowedIPs = 10.252.1.2/32
Endpoint = #s2
[Peer]
PublicKey = ***
AllowedIPs = 10.252.1.3/32
Endpoint = #s3
This is my firewall configuration:
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]
-F INPUT
-F DOCKER-USER
-F FILTERS
-A INPUT -i lo -j ACCEPT
-A INPUT -j FILTERS
-A DOCKER-USER -i ens3 -j FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
-A FILTERS -p icmp --icmp-type echo-request -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -p udp --dport 51820 -j ACCEPT
-A FILTERS -s 10.252.1.0/24 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-port-unreachable
COMMIT
Any ideas? Am I missing something?
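Not part of the original post, but for anyone trying to reproduce the symptom: a minimal Python sketch that requests the same asset through each node (the WireGuard addresses below are taken from the routing table above and are placeholders; adjust as needed) and reports how many bytes actually arrive versus the advertised Content-Length:

import http.client

ASSET = "/app/polyfills.d92dcdb0a986e964fec8.js"

def check(node_ip):
    """GET the asset via one node and compare bytes received with Content-Length."""
    conn = http.client.HTTPConnection(node_ip, 80, timeout=10)
    try:
        conn.request("GET", ASSET, headers={"Host": "service.foo.bar"})
        resp = conn.getresponse()
        expected = int(resp.getheader("Content-Length", "-1"))
        try:
            body = resp.read()
            print(f"{node_ip}: {resp.status} expected={expected} received={len(body)}")
        except http.client.IncompleteRead as exc:
            # Matches the curl (18) error: the connection closed mid-body.
            print(f"{node_ip}: {resp.status} expected={expected} "
                  f"received={len(exc.partial)} (truncated)")
    finally:
        conn.close()

# WireGuard addresses of s1, s2, s3 (placeholders from the question).
for ip in ("10.252.1.1", "10.252.1.2", "10.252.1.3"):
    check(ip)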

Related

portquiz.net DLP test with Python

I am trying to get a response from portquiz.net when probing port 80. For example, if we do this:
curl portquiz.net:80
we get the response:
Port 80 test successful!
Here is the python code:
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    response = s.recv(1024)
    print(repr(response))
With this code I get no response; the script just seems to hang.
Is this an issue with my code or is it something to do with portquiz's server?
Fingers, that is an HTTP server, so you need to make a GET request to it. As soon as you connect, it is waiting for you to send data; that is why it hangs.
You can do this more easily with an HTTP library (see the sketch at the end of this answer); however, if you want to use socket, here is the code, with an example run:
(xcve) ttucker@plato:~/tmp/stackoverflow/portquiz.net$ cat test.py
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    # Send HTTP GET request to /
    s.send('GET / HTTP/1.1\r\nHOST: {}\r\n\r\n'.format(server).encode())
    response = s.recv(1024)
    print(repr(response))
(xcve) ttucker@plato:~/tmp/stackoverflow/portquiz.net$ python test.py
b'HTTP/1.1 200 OK\r\nDate: Sun, 21 Jul 2019 22:25:32 GMT\r\nServer: Apache/2.4.29 (Ubuntu)\r\nVary: Accept-Encoding\r\nContent-Length: 2747\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n\n<html>\n<head>\n<title>Outgoing Port Tester</title>\n<style type="text/css">\nbody {\n\tfont-family: sans-serif;\n\tfont-size: 0.9em;\n}\n</style>\n\n</head>\n\n<body>\n<h1>Outgoing port tester</h1>\n\nThis server listens on all TCP ports, allowing you to test any outbound TCP port.\n\n<p>\nYou have reached this page on port <b>80</b>.<br/>\n</p>\n\nYour network allows you to use this port.\n(Assuming that your network is not doing advanced traffic filtering.)\n\n<p>\nNetwork service: http<br/>\nYour outgoing IP: 207.135.66.186</p>\n\n<h2>Test a port using a command</h2>\n\n<pre>\n$ telnet portquiz.net 80 \nTrying ...\nConnected to portquiz.net.\nEscape character is \'^]\'.\n</pre>\n<pre>\n$ nc -v portquiz.net 80 \nConnection to portquiz.net 80 port [tcp/daytime] succeeded!\n</pre>\n<pre>\n$ curl portquiz.net:80 \nPort 80 test successful!\nYour IP: 207.135.66.186</pre>\n<pre>\n$ wget -q'
Further thoughts on this: it looks like the site might be checking for a curl User-Agent header to determine which version - either HTML or plain text - it sends. It may behoove you to specify the same header, after the "HOST:" header, that curl does, so the response is easier to parse.
Here is the example with the curl header, and how I figured out what to put there:
(xcve) ttucker@plato:~/tmp/stackoverflow/portquiz.net$ curl portquiz.net:80 -vvvv
* Rebuilt URL to: portquiz.net:80/
* Hostname was NOT found in DNS cache
* Trying 52.47.209.216...
* Connected to portquiz.net (52.47.209.216) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: portquiz.net
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 21 Jul 2019 22:34:36 GMT
* Server Apache/2.4.29 (Ubuntu) is not blacklisted
< Server: Apache/2.4.29 (Ubuntu)
< Content-Length: 49
< Content-Type: text/html; charset=UTF-8
<
Port 80 test successful!
Your IP: 207.135.66.186
* Connection #0 to host portquiz.net left intact
(xcve) ttucker@plato:~/tmp/stackoverflow/portquiz.net$ cat test.py
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    get_request = "GET / HTTP/1.1\r\nHOST: {}\r\n" \
                  "User-Agent: curl/7.35.0\r\n\r\n".format(server)
    s.send(get_request.encode())
    response = s.recv(1024)
    print(repr(response))
(xcve) ttucker@plato:~/tmp/stackoverflow/portquiz.net$ python test.py
b'HTTP/1.1 200 OK\r\nDate: Sun, 21 Jul 2019 22:34:47 GMT\r\nServer: Apache/2.4.29 (Ubuntu)\r\nContent-Length: 49\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nPort 80 test successful!\nYour IP: 207.135.66.186\n'
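As mentioned in the answer, an HTTP client library takes care of the request/response framing for you. A minimal sketch with Python's standard-library http.client, equivalent in spirit to the socket version (not taken from the original answer):

import http.client

server = "portquiz.net"
port = 80

conn = http.client.HTTPConnection(server, port, timeout=10)
# http.client adds the Host header and proper request framing for us.
conn.request("GET", "/", headers={"User-Agent": "curl/7.35.0"})
resp = conn.getresponse()
print(resp.status, resp.reason)          # 200 OK
print(resp.read().decode())              # "Port 80 test successful! ..."
conn.close()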

CouchDB can't get cookie auth session for nonadmin user

For admin user:
$ curl -X POST localhost:5984/_session -d "username=admin&password=admin"
{"ok":true,"name":"admin","roles":["_admin"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=YWRtaW...
{"ok":true,"userCtx":{"name":"admin","roles":["_admin"]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"],"authenticated":"cookie"}}
but for regular user:
$ curl -vX POST localhost:5984/_session -d "username=user&password=123"
{"ok":true,"name":"user","roles":["users"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=ZGlqbzo...
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
The same thing happens when I'm doing an XMLHttpRequest via the iron-ajax element, or simply from Chrome. What am I doing wrong?
CouchDB version: 2.1.1
Config:
[chttpd]
bind_address = 0.0.0.0
port = 5984
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
[httpd]
enable_cors = true
[couch_httpd_auth]
allow_persistent_cookies = true
timeout = 60000
[cors]
credentials = true
origins = *
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
I didn't quite get your problem, but here is what I do with curl to authenticate with a cookie as a nonadmin user:
First I run curl with the -v option to see the header fields:
$ curl -k -v -X POST https://192.168.1.106:6984/_session -d 'username=jan&password=****'
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.1.106...
* Connected to 192.168.1.106 (192.168.1.106) port 6984 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 604 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* error fetching CN from cert:The requested data were not available.
* common name: (does not match '192.168.1.106')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: O=Tech Studio
* start date: Sat, 31 Mar 2018 04:37:51 GMT
* expire date: Tue, 30 Mar 2021 04:37:51 GMT
* issuer: O=Tech Studio
* compression: NULL
* ALPN, server did not agree to a protocol
> POST /_session HTTP/1.1
> Host: 192.168.1.106:6984
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 25
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 25 out of 25 bytes
< HTTP/1.1 200 OK
< Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
< Server: CouchDB/2.1.1 (Erlang OTP/18)
< Date: Wed, 02 May 2018 08:03:27 GMT
< Content-Type: application/json
< Content-Length: 44
< Cache-Control: must-revalidate
<
{"ok":true,"name":"jan","roles":["sample"]}
* Connection #0 to host 192.168.1.106 left intact
I see in the above header fields the cookie:
Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
I use the above cookie to authenticate as a nonadmin user and get the user info for the same nonadmin user like this:
$ curl -k -X GET https://192.168.1.106:6984/_users/org.couchdb.user:jan -H 'Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb'
{"_id":"org.couchdb.user:jan","_rev":"3-f11b227a6e1236fa502af668fdbf326d","name":"jan","roles":["sample"],"type":"user","password_scheme":"pbkdf2","iterations":10,"derived_key":"a973123ebd9dbc2a543d477a506268b018e7aab4","salt":"0ef2111a894062b08ffd723fd34b6b75"}
The problem went away when I removed this line from my local.ini:
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
I had used the wrong handler: couch_httpd_auth in the [chttpd] config, when that handler is only written to work with the original couch_httpd module.
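For completeness, here is the same cookie flow scripted in Python with the requests library (a sketch only; host, user and password are placeholders). The Session stores the AuthSession cookie returned by POST /_session and replays it on the follow-up request:

import requests

base = "http://localhost:5984"

with requests.Session() as s:
    # POST /_session sets the AuthSession cookie on success.
    login = s.post(f"{base}/_session",
                   data={"username": "user", "password": "123"})
    print(login.json())                  # e.g. {"ok": True, "name": "user", ...}

    # The session replays the cookie, so userCtx should name the same user.
    who = s.get(f"{base}/_session")
    print(who.json()["userCtx"])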

node silently closes requests with literal space in url

Let's start a simple server:
var http = require('http');
http.createServer(function (req, res) {
    console.log('asdasd');
    res.end('asdasd');
}).listen(8898)
And make a simple request
curl -v 'localhost:8898/?ab'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?ab HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 13 Oct 2016 20:26:14 GMT
< Connection: keep-alive
< Content-Length: 6
<
* Connection #0 to host localhost left intact
asdasd
Looks like everything is all right.
But if we add a literal space to it...
cornholio-osx:~/>curl -v 'localhost:8898/?a b'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?a b HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
Nothing is logged and no body is written.
I assume literal spaces in URLs are a violation of the HTTP protocol, but is this behavior HTTP-compliant?
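Not part of the original question, but for illustration: a literal space is not allowed in a request-target, and percent-encoding it (per RFC 3986) avoids the rejection entirely. A small Python sketch against the same local server:

import http.client
from urllib.parse import quote

# Encode the space before putting it in the request line.
path = "/?" + quote("a b")               # -> '/?a%20b'

conn = http.client.HTTPConnection("localhost", 8898, timeout=5)
conn.request("GET", path)
resp = conn.getresponse()
print(resp.status, resp.read().decode()) # 200 asdasd
conn.close()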

Difference between curl expressions

I have an API server running at localhost:3000 and I am trying to query it using these two expressions:
[wani@lenovo ilparser-docker]$ time (curl "localhost:3000/parse?lang=hin&data=देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.023s
user 0m0.009s
sys 0m0.004s
[wani@lenovo ilparser-docker]$ time (curl -XGET localhost:3000/parse -F lang=hin -F data="देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.101s
user 0m0.020s
sys 0m0.070s
Why does the second expression take so much more time?
With more verbosity:
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse -F lang=hin -F data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 244
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------1eb5e5991b976cb1
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Length: 70
< Server: Mojolicious (Perl)
< Content-Type: application/json;charset=UTF-8
< Date: Mon, 21 Mar 2016 11:06:09 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.106s
user 0m0.027s
sys 0m0.068s
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse --data lang=hin --data data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 23 out of 23 bytes
< HTTP/1.1 200 OK
< Server: Mojolicious (Perl)
< Content-Length: 70
< Connection: keep-alive
< Date: Mon, 21 Mar 2016 11:06:24 GMT
< Content-Type: application/json;charset=UTF-8
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.031s
user 0m0.011s
sys 0m0.003s
Expect: 100-continue sounded fishy, so I cleared that header:
[wani@lenovo ilparser-docker]$ time curl -v -F lang=hin -F data="देश" "localhost:3000/parse" -H Expect: --trace-time
16:48:04.513691 * Trying 127.0.0.1...
16:48:04.513933 * Connected to localhost (127.0.0.1) port 3000 (#0)
16:48:04.514083 * Initializing NSS with certpath: sql:/etc/pki/nssdb
16:48:04.610095 > POST /parse HTTP/1.1
16:48:04.610095 > Host: localhost:3000
16:48:04.610095 > User-Agent: curl/7.43.0
16:48:04.610095 > Accept: */*
16:48:04.610095 > Content-Length: 244
16:48:04.610095 > Content-Type: multipart/form-data; boundary=------------------------24f30647b16ba82d
16:48:04.610095 >
16:48:04.618107 < HTTP/1.1 200 OK
16:48:04.618194 < Content-Length: 70
16:48:04.618249 < Server: Mojolicious (Perl)
16:48:04.618306 < Content-Type: application/json;charset=UTF-8
16:48:04.618370 < Date: Mon, 21 Mar 2016 11:18:04 GMT
16:48:04.618430 < Connection: keep-alive
16:48:04.618492 <
16:48:04.618590 * Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.117s
user 0m0.023s
sys 0m0.082s
Now the only time-consuming thing left is: Initializing NSS with certpath: sql:/etc/pki/nssdb. Why does curl do that in this context?
After a little help on IRC from @DanielStenberg, I came to know that the DB load happens because curl initializes NSS in that case: curl needs a good random source for the boundary separator used by -F. Curl could have used the getrandom() syscall or read bits out of /dev/urandom, since boundary separators don't need to be cryptographically secure in any way, but curl wants secure randomness in some other places, so it reuses the random function it already has.
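For reference, the two invocations correspond roughly to the following in Python with the requests library (a sketch; note that curl's -XGET with -F still sends a multipart body, shown here as a plain POST for simplicity):

import requests

base = "http://localhost:3000/parse"

# curl "localhost:3000/parse?lang=hin&data=देश"
# -> a plain GET with a URL-encoded query string.
r1 = requests.get(base, params={"lang": "hin", "data": "देश"})

# curl -XGET localhost:3000/parse -F lang=hin -F data="देश"
# -> -F switches curl to multipart/form-data with a random boundary,
#    which is what made curl initialize NSS in the first place.
r2 = requests.post(base, files={"lang": (None, "hin"), "data": (None, "देश")})

print(r1.status_code, r1.text)
print(r2.status_code, r2.text)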

nodejs digest authentication failing

I'm trying to send an HTTP GET request and authenticate using the digest method upon receiving the authentication challenge. I keep getting a 401 Unauthorized response even though my code generates an authentication response identical to Firefox's and curl's, given the same challenge. I have tried the popular Node.js module "request" with the same results. Here's the tcpdump output of two requests. The first is from Firefox, which succeeds:
15:18:03.615255 IP 192.168.18.1.33966 > 192.168.20.220.30005: tcp 0
....E..<..#.#.............u5K3........9..\.........
"`..........
15:18:03.634223 IP 192.168.20.220.30005 > 192.168.18.1.33966: tcp 0
....E..<..#.=...........u5......K3.................
.g.t"`......
15:18:03.634269 IP 192.168.18.1.33966 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5K3.........s.T.....
"`...g.t
15:18:03.735485 IP 192.168.18.1.33966 > 192.168.20.220.30005: tcp 290
....E..V..#.#.............u5K3.........s.v.....
"`...g.tGET / HTTP/1.1
Host: 192.168.20.220:30005
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
15:18:03.753943 IP 192.168.20.220.30005 > 192.168.18.1.33966: tcp 0
X........c#.=..2........u5......K3.1..
.g.."`..
15:18:03.762129 IP 192.168.20.220.30005 > 192.168.18.1.33966: tcp 228
X........d#.=..M........u5......K3.1..
.g.."`..HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: Digest realm="IgdAuthentication", domain="/", nonce="ZDE4NTY3ZmM6NmYyMzA3NjM6YmQ5NGY3YTA=", qop="auth", algorithm=MD5, opaque="5ccc09c403ebaf9f0171e9517f40e41"
15:18:03.762172 IP 192.168.18.1.33966 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5K3.1.......{.T.....
"`...g..
15:18:06.215945 IP 192.168.18.1.33966 > 192.168.20.220.30005: tcp 564
....E..h..#.#.............u5K3.1.......{.......
"`...g..GET / HTTP/1.1
Host: 192.168.20.220:30005
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Authorization: Digest username="admin", realm="IgdAuthentication", nonce="ZDE4NTY3ZmM6NmYyMzA3NjM6YmQ5NGY3YTA=", uri="/", algorithm=MD5, response="ae43f4fcaf71340f9c360877dad87c66", opaque="5ccc09c403ebaf9f0171e9517f40e41", qop=auth, nc=00000001, cnonce="9d1ea29022ec08d6"
15:18:06.244925 IP 192.168.20.220.30005 > 192.168.18.1.33966: tcp 38
....E..Z.e#.=..
........u5......K3.e.....Q.....
.g.."`..HTTP/1.1 200 OK
Content-Length: 0
The second is from my code which fails:
15:19:08.589647 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 0
....E..<..#.#.............u5a.........9..\.........
"`A.........
15:19:08.608304 IP 192.168.20.220.30005 > 192.168.18.1.33972: tcp 0
....E..<..#.=...........u5....#<a.......aN.........
.h.<"`A.....
15:19:08.608333 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5a.....#=...s.T.....
"`A..h.<
15:19:08.608872 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 70
....E..z..#.#.............u5a.....#=...s.......
"`A..h.<GET / HTTP/1.1
Host: 192.168.20.220:30005
Connection: keep-alive
15:19:08.626556 IP 192.168.20.220.30005 > 192.168.18.1.33972: tcp 0
....E..4.r#.=..#........u5....#=a......#.......
.h.O"`A.
15:19:08.631951 IP 192.168.20.220.30005 > 192.168.18.1.33972: tcp 228
....E....s#.=..>........u5....#=a......#A......
.h.Q"`A.HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: Digest realm="IgdAuthentication", domain="/", nonce="YmM4ZWY0YjE6MWY4ZjVkMmQ6IGIwNjdkZWI=", qop="auth", algorithm=MD5, opaque="5ccc09c403ebaf9f0171e9517f40e41"
15:19:08.631966 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5a.....$!...{.T.....
"`A".h.Q
15:19:08.634442 IP 192.168.18.1.33973 > 192.168.20.220.30005: tcp 0
....E..<..#.#.
...........u5...$......9..\.........
"`A#........
15:19:08.653166 IP 192.168.20.220.30005 > 192.168.18.1.33973: tcp 0
....E..<..#.=...........u5....~w...%....3..........
.h.i"`A#....
15:19:08.653201 IP 192.168.18.1.33973 > 192.168.20.220.30005: tcp 0
....E..4..#.#.
...........u5...%..~x...s.T.....
"`A'.h.i
15:19:08.653534 IP 192.168.18.1.33973 > 192.168.20.220.30005: tcp 524
....E..#. #.#.............u5...%..~x...s.`.....
"`A(.h.iGET / HTTP/1.1
Authorization: Digest username="admin", realm="IgdAuthentication", nonce="YmM4ZWY0YjE6MWY4ZjVkMmQ6IGIwNjdkZWI=", uri="/", algorithm=MD5, response="1d0539755e0e2ca204a9821027041e8b", qop=auth, nc=00000001, cnonce="MjMwMjkw", opaque="5ccc09c403ebaf9f0171e9517f40e41"
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:27.0) Gecko/20100101 Firefox/27.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Host: 192.168.20.220:30005
Connection: keep-alive
15:19:08.672345 IP 192.168.20.220.30005 > 192.168.18.1.33973: tcp 0
Xi........#.=...........u5....~x...1..
.h.|"`A(
15:19:10.633047 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5a.....$!...{.T.....
"`C..h.Q
15:19:10.651962 IP 192.168.20.220.30005 > 192.168.18.1.33972: tcp 0
....E..4.t#.=..!........u5....$!a......#.&.....
.h.8"`C.
15:19:10.651998 IP 192.168.18.1.33972 > 192.168.20.220.30005: tcp 0
....E..4..#.#.............u5a.....$"...{.T.....
"`C..h.8
15:19:10.653565 IP 192.168.18.1.33973 > 192.168.20.220.30005: tcp 0
....E..4.
#.#.
...........u5...1..~x...s.T.....
"`C..h.|
15:19:10.711119 IP 192.168.20.220.30005 > 192.168.18.1.33973: tcp 0
X_........#.=...........u5....~x...2..
.h.s"`C.
15:19:12.674799 IP 192.168.20.220.30005 > 192.168.18.1.33973: tcp 228
X.........#.=...........u5....~x...2..
.h.."`C.HTTP/1.1 401 Unauthorized
Content-Length: 0
WWW-Authenticate: Digest realm="IgdAuthentication", domain="/", nonce="YWU1ZjhkMWM6MzFmZjllMDA6YzAxNjY4MGM=", qop="auth", algorithm=MD5, opaque="5ccc09c403ebaf9f0171e9517f40e41"
What could be different between the two requests that's causing this?
The problem was caused by the response to the authentication challenge being sent on a separate connection, which rendered the challenge invalid. The solution is to use an agent with maxSockets set to 1 (to avoid opening a new connection) and to make sure the socket doesn't get closed before the second request is queued.
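The same principle, for comparison, in Python with the requests library (a sketch; the URL and credentials are placeholders taken from the capture above): HTTPDigestAuth handles the 401 challenge and retries, and the Session's connection pooling keeps the retry on the same TCP connection, which this server requires.

import requests
from requests.auth import HTTPDigestAuth

url = "http://192.168.20.220:30005/"     # placeholder host from the capture above

with requests.Session() as s:
    # The 401 challenge and the authenticated retry travel over the same
    # pooled connection, so the server-issued nonce stays valid.
    resp = s.get(url, auth=HTTPDigestAuth("admin", "password"))
    print(resp.status_code)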
