I am trying to get a response from portquiz.net when probing port 80. For example, if we do this:
curl portquiz.net:80
we get the response:
Port 80 test successful!
Here is the python code:
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    response = s.recv(1024)
    print(repr(response))
With this code I get no response, the script just seems to hang.
Is this an issue with my code or is it something to do with portquiz's server?
Fingers, that is an HTTP server, so you need to make a GET request to it. As soon as you connect, the server is waiting for you to send data; that is why your script hangs on recv().
You can do this more easily with an HTTP library; however, if you want to use socket, here is the code, with an example run:
(xcve) ttucker#plato:~/tmp/stackoverflow/portquiz.net$ cat test.py
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    # Send HTTP GET request to /
    s.send('GET / HTTP/1.1\r\nHOST: {}\r\n\r\n'.format(server).encode())
    response = s.recv(1024)
    print(repr(response))
(xcve) ttucker#plato:~/tmp/stackoverflow/portquiz.net$ python test.py
b'HTTP/1.1 200 OK\r\nDate: Sun, 21 Jul 2019 22:25:32 GMT\r\nServer: Apache/2.4.29 (Ubuntu)\r\nVary: Accept-Encoding\r\nContent-Length: 2747\r\nContent-Type: text/html; charset=UTF-8\r\n\r\n\n<html>\n<head>\n<title>Outgoing Port Tester</title>\n<style type="text/css">\nbody {\n\tfont-family: sans-serif;\n\tfont-size: 0.9em;\n}\n</style>\n\n</head>\n\n<body>\n<h1>Outgoing port tester</h1>\n\nThis server listens on all TCP ports, allowing you to test any outbound TCP port.\n\n<p>\nYou have reached this page on port <b>80</b>.<br/>\n</p>\n\nYour network allows you to use this port.\n(Assuming that your network is not doing advanced traffic filtering.)\n\n<p>\nNetwork service: http<br/>\nYour outgoing IP: 207.135.66.186</p>\n\n<h2>Test a port using a command</h2>\n\n<pre>\n$ telnet portquiz.net 80 \nTrying ...\nConnected to portquiz.net.\nEscape character is \'^]\'.\n</pre>\n<pre>\n$ nc -v portquiz.net 80 \nConnection to portquiz.net 80 port [tcp/daytime] succeeded!\n</pre>\n<pre>\n$ curl portquiz.net:80 \nPort 80 test successful!\nYour IP: 207.135.66.186</pre>\n<pre>\n$ wget -q'
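Note that recv(1024) only returns the first chunk of the response, which is why the 2747-byte HTML page above is cut off. Here is a minimal sketch that reads the whole thing, under the assumption that we also send a Connection: close header so the server closes the socket when it is done:

import socket

server = "portquiz.net"
port = 80

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    # Connection: close asks the server to close the socket after the response,
    # so an empty recv() result reliably marks the end of the data.
    request = ("GET / HTTP/1.1\r\n"
               "Host: {}\r\n"
               "Connection: close\r\n\r\n").format(server)
    s.sendall(request.encode())
    chunks = []
    while True:
        chunk = s.recv(1024)
        if not chunk:
            break
        chunks.append(chunk)
    print(b"".join(chunks).decode(errors="replace"))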
Further thoughts on this: it looks like the site might be checking for a curl User-Agent header to determine which version of the response - either HTML or plain text - it sends. It may behoove you to specify the same User-Agent header that curl does, after the "HOST: " header, so the response is easier to parse.
Here is an example with the curl header, and how I figured out what to put there:
(xcve) ttucker#plato:~/tmp/stackoverflow/portquiz.net$ curl portquiz.net:80 -vvvv
* Rebuilt URL to: portquiz.net:80/
* Hostname was NOT found in DNS cache
* Trying 52.47.209.216...
* Connected to portquiz.net (52.47.209.216) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: portquiz.net
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 21 Jul 2019 22:34:36 GMT
* Server Apache/2.4.29 (Ubuntu) is not blacklisted
< Server: Apache/2.4.29 (Ubuntu)
< Content-Length: 49
< Content-Type: text/html; charset=UTF-8
<
Port 80 test successful!
Your IP: 207.135.66.186
* Connection #0 to host portquiz.net left intact
(xcve) ttucker#plato:~/tmp/stackoverflow/portquiz.net$ cat test.py
import socket
server = "portquiz.net"
port = 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((server, port))
    get_request = "GET / HTTP/1.1\r\nHOST: {}\r\n" \
                  "User-Agent: curl/7.35.0\r\n\r\n".format(server)
    s.send(get_request.encode())
    response = s.recv(1024)
    print(repr(response))
(xcve) ttucker#plato:~/tmp/stackoverflow/portquiz.net$ python test.py
b'HTTP/1.1 200 OK\r\nDate: Sun, 21 Jul 2019 22:34:47 GMT\r\nServer: Apache/2.4.29 (Ubuntu)\r\nContent-Length: 49\r\nContent-Type: text/html; charset=UTF-8\r\n\r\nPort 80 test successful!\nYour IP: 207.135.66.186\n'
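For completeness, the "HTTP library" route mentioned at the top is much shorter. A minimal sketch using Python's standard-library http.client (the requests package would work just as well); the curl User-Agent is only there to get the plain-text version of the page:

import http.client

server = "portquiz.net"

# http.client builds the request line and Host header for us.
conn = http.client.HTTPConnection(server, 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "curl/7.35.0"})
response = conn.getresponse()
print(response.status, response.reason)
print(response.read().decode())
conn.close()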
Related
I set up a Docker swarm with 3 nodes:
s1: manager + worker
s2: worker
s3: worker
I deployed an nginx as a reverse proxy to a Docker swarm service on each node, publishing the ports with mode=host to get the real client IP. Nginx works "fine": I am able to serve static content, use it over HTTPS, etc.
The part that doesn't work is the reverse proxying:
if nginx and the service are on the same node, everything works
if nginx and the service aren't on the same node, I can only GET /, because other requests (like /css/style.css) fail with 499 (from nginx's point of view)
The nginx network is a swarm-scoped overlay network, and IP forwarding is enabled.
Here is my nginx configuration:
server {
    listen 80;
    server_name service.foo.bar;

    location / {
        proxy_pass http://service:80;
    }
}

server {
    listen 443 ssl;
    server_name service.foo.bar;

    ssl_certificate /ssl/service.foo.bar/fullchain.pem;
    ssl_certificate_key /ssl/service.foo.bar/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://service:80;
    }
}
Here is how I deployed my nginx:
docker service create --name nginx --mount type=bind,source=/etc/nginx/nginx.conf,target=/etc/nginx/nginx.conf --mode=global --publish mode=host,published=80,target=80 --publish mode=host,published=443,target=443 --network nginx nginx
If I curl the node that hosts the service:
* TCP_NODELAY set
* Connected to service.foo.bar port 80 (#0)
> GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1
> Host: service.foo.bar
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.6
< Date: Mon, 13 Jun 2022 15:38:36 GMT
< Content-Type: application/javascript
< Content-Length: 257750
< Connection: keep-alive
< cache-control: public, immutable, max-age=604800
< expires: Mon, 20 Jun 2022 15:38:36 GMT
< permissions-policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), sync-xhr=(self "https://haveibeenpwned.com" "https://2fa.directory"), usb=(), vr=()
< x-content-type-options: nosniff
< x-frame-options: SAMEORIGIN
< referrer-policy: same-origin
< x-xss-protection: 0
<
/*! For license information please see polyfills.d92dcdb0a986e964fec8.js.LICENSE.txt */
[...]
If I curl a node that doesn't host the service:
* TCP_NODELAY set
* Connected to service.foo.bar port 80 (#0)
> GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1
> Host: service.foo.bar
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.21.6
< Date: Mon, 13 Jun 2022 15:38:25 GMT
< Content-Type: application/javascript
< Content-Length: 257750
< Connection: keep-alive
< cache-control: public, immutable, max-age=604800
< expires: Mon, 20 Jun 2022 15:38:25 GMT
< permissions-policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), sync-xhr=(self "https://haveibeenpwned.com" "https://2fa.directory"), usb=(), vr=()
< x-content-type-options: nosniff
< x-frame-options: SAMEORIGIN
< referrer-policy: same-origin
< x-xss-protection: 0
<
* transfer closed with 257750 bytes remaining to read
* Closing connection 0
curl: (18) transfer closed with 257750 bytes remaining to read
The nginx log says:
nginx.0.scembp2e9iqp#s3 | 2022/06/13 15:38:36 [warn] 23#23: *114 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/5/00/0000000005 while reading upstream, client: #ip, server: service.foo.bar, request: "GET /app/polyfills.d92dcdb0a986e964fec8.js HTTP/1.1", upstream: "http://10.0.4.56:80/app/polyfills.d92dcdb0a986e964fec8.js", host: "service.foo.bar"
My nodes are connected to each other over WireGuard; this is my routing table:
default via #ip dev ens3
#ip dev ens3 scope link
10.252.1.0/24 dev wg0 proto kernel scope link src 10.252.1.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.19.0.0/16 dev docker_gwbridge proto kernel scope link src 172.19.0.1
Here is my WireGuard configuration:
[Interface]
Address = 10.252.1.1/24
ListenPort = 51820
PrivateKey = ***
[Peer]
PublicKey = ***
AllowedIPs = 10.252.1.2/32
Endpoint = #s2
[Peer]
PublicKey = ***
AllowedIPs = 10.252.1.3/32
Endpoint = #s3
This is my firewall configuration:
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]
-F INPUT
-F DOCKER-USER
-F FILTERS
-A INPUT -i lo -j ACCEPT
-A INPUT -j FILTERS
-A DOCKER-USER -i ens3 -j FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
-A FILTERS -p icmp --icmp-type echo-request -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -p udp --dport 51820 -j ACCEPT
-A FILTERS -s 10.252.1.0/24 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-port-unreachable
COMMIT
Any ideas? Am I missing something?
sendFile is for sending files and it also figures out some interesting headers from the file (like content length). For a HEAD request I would ideally want the exact same headers but just skip the body.
There doesn't seem to be an option for this in the API. Maybe I can override something in the response object to stop it from sending anything?
Here's what I got:
res.sendFile(file, { headers: hdrs, lastModified: false, etag: false })
Has anyone solved this?
As Robert Klep has already written, sendFile already has the required behavior of sending the headers and not sending the body if the request method is HEAD.
In addition to that, Express already handles HEAD requests for routes that have GET handlers defined. So you don't even need to define any HEAD handler explicitly.
Example:
let app = require('express')();
let file = __filename;
let hdrs = {'X-Custom-Header': '123'};
app.get('/file', (req, res) => {
  res.sendFile(file, { headers: hdrs, lastModified: false, etag: false });
});
app.listen(3322, () => console.log('Listening on 3322'));
This sends its own source code on GET /file as can be demonstrated with:
$ curl -v -X GET localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> GET /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:45:36 GMT
< Connection: keep-alive
<
[...]
The [...] is the body that was not included here.
Without adding any new handler this will also work:
$ curl -v -X HEAD localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> HEAD /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:46:29 GMT
< Connection: keep-alive
<
This is the same but with no body.
Express uses send to implement sendFile, which already does exactly what you want.
Let's start a simple server:
var http = require('http');
http.createServer(function (req, res) {
  console.log('asdasd');
  res.end('asdasd');
}).listen(8898);
And make a simple request:
curl -v 'localhost:8898/?ab'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?ab HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 13 Oct 2016 20:26:14 GMT
< Connection: keep-alive
< Content-Length: 6
<
* Connection #0 to host localhost left intact
asdasd
Looks like everything is all right.
But if we add a literal space to it...
cornholio-osx:~/>curl -v 'localhost:8898/?a b'
* Trying ::1...
* Connected to localhost (::1) port 8898 (#0)
> GET /?a b HTTP/1.1
> Host: localhost:8898
> User-Agent: curl/7.43.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
Nothing is logged and no body is written.
I assume literal spaces in URLs are a violation of the HTTP protocol, but is this behavior HTTP-compliant?
I have an API server running at localhost:3000 and I am trying to query it using these two expressions:
[wani#lenovo ilparser-docker]$ time (curl "localhost:3000/parse?lang=hin&data=देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.023s
user 0m0.009s
sys 0m0.004s
[wani#lenovo ilparser-docker]$ time (curl -XGET localhost:3000/parse -F lang=hin -F data="देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.101s
user 0m0.020s
sys 0m0.070s
Why does the second expression take so much more time?
With more verbosity:
[wani#lenovo ilparser-docker]$ time curl -v localhost:3000/parse -F lang=hin -F data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 244
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------1eb5e5991b976cb1
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Length: 70
< Server: Mojolicious (Perl)
< Content-Type: application/json;charset=UTF-8
< Date: Mon, 21 Mar 2016 11:06:09 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.106s
user 0m0.027s
sys 0m0.068s
[wani#lenovo ilparser-docker]$ time curl -v localhost:3000/parse --data lang=hin --data data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 23 out of 23 bytes
< HTTP/1.1 200 OK
< Server: Mojolicious (Perl)
< Content-Length: 70
< Connection: keep-alive
< Date: Mon, 21 Mar 2016 11:06:24 GMT
< Content-Type: application/json;charset=UTF-8
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.031s
user 0m0.011s
sys 0m0.003s
Expect: 100-continue sounded fishy, so I cleared that header:
[wani#lenovo ilparser-docker]$ time curl -v -F lang=hin -F data="देश" "localhost:3000/parse" -H Expect: --trace-time
16:48:04.513691 * Trying 127.0.0.1...
16:48:04.513933 * Connected to localhost (127.0.0.1) port 3000 (#0)
16:48:04.514083 * Initializing NSS with certpath: sql:/etc/pki/nssdb
16:48:04.610095 > POST /parse HTTP/1.1
16:48:04.610095 > Host: localhost:3000
16:48:04.610095 > User-Agent: curl/7.43.0
16:48:04.610095 > Accept: */*
16:48:04.610095 > Content-Length: 244
16:48:04.610095 > Content-Type: multipart/form-data; boundary=------------------------24f30647b16ba82d
16:48:04.610095 >
16:48:04.618107 < HTTP/1.1 200 OK
16:48:04.618194 < Content-Length: 70
16:48:04.618249 < Server: Mojolicious (Perl)
16:48:04.618306 < Content-Type: application/json;charset=UTF-8
16:48:04.618370 < Date: Mon, 21 Mar 2016 11:18:04 GMT
16:48:04.618430 < Connection: keep-alive
16:48:04.618492 <
16:48:04.618590 * Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.117s
user 0m0.023s
sys 0m0.082s
Now the only time-consuming thing left is: Initializing NSS with certpath: sql:/etc/pki/nssdb. Why does curl do that in this context?
After a little help on IRC from #DanielStenberg, I came to know that the DB load happens because curl initializes NSS in that case: curl needs a good random source for the boundary separator used for -F. Curl could have used the getrandom() syscall or read bits out of /dev/urandom, since boundary separators don't need to be cryptographically secure in any way, but curl wants secure random in some other places, so it reuses the random function that it already has.
I am trying to figure out how to query, using curl, the "jayson" npm package for Node.js, available from https://github.com/tedeh/jayson. I am using the test program below on the server side. Node is running properly and responds to curl with an error output. I cannot find what should be passed in curl or what should be changed in the Node program:
var jayson = require(__dirname + '/../..');
var server = jayson.server({
  echo: function(msg, callback) {
    if (msg != null)
      callback(null, msg);
  },
  add: function(a, b, callback) {
    if ((a != null) && (b != null))
      callback(null, a + b);
  }
});
server.http().listen(90);
This is the curl command:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"echo", "params": ["hello"] }' http://localhost:90
And this is the Node.js answer received by curl when the curl command is not compliant with jayson:
* About to connect() to localhost port 90 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 90 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.28.1
> Host: localhost:90
> Accept: */*
> Content-Type: application/json
> Content-Length: 16
>
* upload completely sent off: 16 out of 16 bytes
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Length: 79
Content-Length: 79
< Content-Type: application/json
Content-Type: application/json
< Date: Tue, 25 Mar 2014 19:26:21 GMT
Date: Tue, 25 Mar 2014 19:26:21 GMT
< Connection: keep-alive
Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"jsonrpc":"2.0","id":null,"error":{"code":-32600,"message":"Invalid request"}}* Closing connection #0
Many thanks for the help,
Rémi
You are running curl from Windows. That's why the single quotes around the parameters don't work for you. Change them to double quotes:
curl -i -X POST -H "Content-Type: application/json" -d "{\"echo\": \"Name\"}" http://localhost:90/?
Also, try to run the command with -v appended at the end. It will show you curl's debug messages. Show us the output if it doesn't work for you.
I found the trick and have corrected the error above. The request must be JSON-RPC 2.0 compliant, as jayson only accepts JSON-RPC 2.0.
This is the correct curl command line for the "echo" script:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"echo", "params": ["hello"] }' localhost:90
This is the correct curl command line for the "add" script:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"add", "params": [1, 2] }' http://localhost:90
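For anyone who prefers to test from a script rather than curl, here is a minimal sketch using Python's standard library, assuming the jayson server above is listening on port 90:

import json
import urllib.request

# JSON-RPC 2.0 request for the "add" method defined in the jayson server
payload = json.dumps({
    "jsonrpc": "2.0",
    "id": "curltest",
    "method": "add",
    "params": [1, 2],
}).encode()

req = urllib.request.Request(
    "http://localhost:90",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Expect something like {"jsonrpc":"2.0","id":"curltest","result":3}
    print(resp.read().decode())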