Using nc, I was able to download one file with:
GET /index.html HTTP/1.1
Host: {insert host name}
But I can't figure out how to download multiple files in one request
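You can pipeline HTTP/1.1 requests: write several requests back-to-back on the same connection before reading the responses. A minimal sketch with printf and nc, assuming a host example.com and a second path /about.html (both placeholders) and a server that honors keep-alive:
# two pipelined requests on one TCP connection; the second one closes it
printf 'GET /index.html HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\nGET /about.html HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80
The two responses come back concatenated, in order, on the same connection.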
I am managing a site hosted on AWS EC2 using nginx. To stay ahead of threats I continuously monitor the nginx logs (access.log and error.log). Many threats are handled well by tweaking nginx.conf, but with this specific one I cannot even figure out how the attacker manages to send such a request.
access.log
xx.xxx.xx.xxx - - [18/Aug/2021:09:04:13 +0000] "GET http://xxxxxxxxx.com/ HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"
In the above case, let's say the name of my website is http://abc-xyz-1234.com. The attacker is passing a full URL as the request path (i.e. http://xxxxxxxxx.com/), and nginx responds with 200. I am still scratching my head over how the request was made and what was served in those 1400 bytes (that response length is still much smaller than my site's response for the path /).
Since I believe this is not possible through a browser, I tried to simulate it using curl, but it wouldn't work.
This is treated as two separate requests by curl:
curl -A Mozilla http://abc-xyz-1234.com/ http://xxxxxxxxx.com
This is an invalid domain:
curl -A Mozilla http://abc-xyz-1234.comhttp://xxxxxxxxx.com
This hits the host with the path /http://xxxxxxxxx.com and gets rejected. The attacker manages to send the URL without the "/" prefix, and that is what I am trying to simulate:
curl -A Mozilla http://abc-xyz-1234.com/http://xxxxxxxxx.com
You can use --request-target for this:
curl -A Mozilla http://abc-xyz-1234.com --request-target http://xxxxxxxxx.com
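--request-target (available since curl 7.55.0) keeps curl connecting to abc-xyz-1234.com but substitutes the given string as the request target, so the request on the wire looks roughly like this (sketch; other headers omitted):
GET http://xxxxxxxxx.com HTTP/1.1
Host: abc-xyz-1234.com
User-Agent: Mozilla
This reproduces what the attacker sent: an absolute URI in the request line instead of a path starting with "/".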
I am trying to configure HTTPS for my website hosted on IIS on Windows Server 2012. My project is a default WebApi application from ASP.NET Core 2.1, and I am using win-acme to configure IIS with a Let's Encrypt certificate.
Everything runs fine except that HTTP traffic is not being redirected to HTTPS. If I check the logs, I get:
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
Failed to determine the https port for redirect.
I thought that this could be easily solved by adding this to my ConfigureServices method:
services.AddHttpsRedirection(options =>
{
    options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
    options.HttpsPort = 443;
});
However, now I can’t access my website at all. Chrome gives me a message:
This page isn't working www.example.com redirected you too many times
If I check the logs it shows me a bunch of lines like this:
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://example.com
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 0.1026ms 307
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET http://example.com
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 0.1837ms 307
Does anyone have any idea what I am doing wrong?
I just found the answer. The app sits behind a proxy that terminates TLS, so every request reaches the app as plain HTTP and the redirection middleware keeps redirecting; the forwarded headers are what tell the app the original scheme. This can be solved by adding the following code to the ConfigureServices method in Startup.cs:
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardLimit = 2;
    options.KnownProxies.Add(IPAddress.Parse("127.0.10.1"));
    options.ForwardedForHeaderName = "X-Forwarded-For-My-Custom-Header-Name";
});
Source: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1#forwarded-headers-middleware-options
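One caveat worth adding (an assumption based on the linked docs, not part of the answer above): configuring ForwardedHeadersOptions by itself does nothing unless the forwarded headers middleware actually runs, and on 2.1 the default ForwardedHeaders value is None, so the scheme header has to be opted into, e.g. options.ForwardedHeaders = ForwardedHeaders.XForwardedProto in ConfigureServices. A sketch of the Configure method under those assumptions:
public void Configure(IApplicationBuilder app)
{
    // Must run before UseHttpsRedirection so the app sees the original
    // scheme (https) instead of the plain-HTTP request from the proxy.
    app.UseForwardedHeaders();
    app.UseHttpsRedirection();
    app.UseMvc();
}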
I have a Tornado server running on some port.
If I make a request from a browser to a non-existent URL, Tornado prints:
WARNING:tornado.access:404 POST /some_url/ (MY.REAL.IP) 0.64ms python
But I noticed another 404 that appears to come from localhost:
WARNING:tornado.access:404 POST /some_url/ (127.0.0.1) 0.64ms python
Is it possible, in theory, that this request was made by some "cool hacker" from a remote server using curl --resolve or something similar?
The only way this address should be spoofable would be if you set xheaders=True in your HTTPServer constructor. If you use xheaders=True, you should also be using a frontend proxy that sanitizes headers appropriately so it will not allow X-Real-IP headers from outside sources.
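For reference, a minimal sketch of that constructor (hypothetical handler and port; the point is the xheaders flag, which makes Tornado report the client address from X-Real-IP / X-Forwarded-For):
import tornado.httpserver
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # remote_ip comes from X-Real-IP only when xheaders=True
        self.write(self.request.remote_ip)

app = tornado.web.Application([(r"/", MainHandler)])
# Only safe behind a frontend proxy that strips these headers
# from outside traffic.
server = tornado.httpserver.HTTPServer(app, xheaders=True)
server.listen(8888)
tornado.ioloop.IOLoop.current().start()
With xheaders left at its default of False, tornado.access logs the real TCP peer address, so a 127.0.0.1 entry then means the connection genuinely came from the local machine.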
I'm fairly new to Node.js. I'm working on a small Node.js microservice and it is running well. But a recent requirement says this service needs to support HTTP/1.1 pipelining, and I'm failing to find anything in the Node.js docs on how to enable or support that.
Please point me to an appropriate doc/module/resource for implementing HTTP/1.1 pipelining.
Thanks.
The comments from @shaochuancs and @Helen are about the Node.js HTTP client. If you need a server-side implementation of HTTP pipelining, that depends entirely on the Node.js core library.
Server-side HTTP pipelining support is built in and already works in Node.js (I just tested on v5.5.0, v6.2.1 and v7.0.9).
To test pipelining support, simply chain two HTTP requests in the same TCP/IP connection. You can do this using telnet or netcat (nc).
# telnet, connecting to port 80, chaining 2 requests on /login
# for host foo.com
(echo -en "GET /login HTTP/1.1\nHost: foo.com\nConnection: keep-alive\n\nGET /login HTTP/1.1\nHost: foo.com\n\n"; sleep 10) | telnet localhost 80
# same thing using printf and netcat
printf "GET /login HTTP/1.1\r\nHost: foo.com\r\nConnection: keep-alive\r\n\r\nGET /login HTTP/1.1\r\nHost: foo.com\r\n\r\n" | nc -q 10 localhost 80
Then count the number of responses; you should get 2 (or 1 if pipelining is not supported). Search for 'HTTP/1.1 200 OK' in the output.
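If you need something to run the test against, a plain Node core HTTP server is enough; nothing pipeline-specific has to be enabled (hypothetical handler, listening on port 80 to match the commands above):
// Plain Node core HTTP server; pipelined requests arriving on one
// connection are parsed and answered in order automatically.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok\n');
}).listen(80);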
I want to verify that my web application does not have a path traversal vulnerability.
I'm trying to use curl for that, like this:
$ curl -v http://www.example.com/directory/../
I would like the HTTP request to be explicitly made to the /directory/../ URL, to test that a specific nginx rule involving proxy is not vulnerable to path traversal. I.e., I would like this HTTP request to be sent:
> GET /directory/../ HTTP/1.1
But curl rewrites the request to the / URL, as can be seen in the output:
* Rebuilt URL to: http://www.example.com/
(...)
> GET / HTTP/1.1
Is it possible to use curl for this test, forcing it to pass the exact URL in the request? If not, what would be an appropriate way?
The curl flag you are looking for is --path-as-is.
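Applied to the example above (www.example.com is the placeholder host from the question), the flag stops curl from squashing the dot-dot sequence:
curl --path-as-is -v http://www.example.com/directory/../
and the verbose output then shows the path sent untouched:
> GET /directory/../ HTTP/1.1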
I'm not aware of a way to do it via curl, but you could always use telnet. Try this command:
telnet www.example.com 80
You'll see:
Trying xxx.xxx.xxx.xxx...
Connected to www.example.com.
Escape character is '^]'.
You now have an open connection to www.example.com. Now type in your request to fetch the page; HTTP/1.1 requires a Host header, and a blank line ends the request:
GET /directory/../ HTTP/1.1
Host: www.example.com
And you should see your result. e.g.
HTTP/1.1 400 Bad Request
You can use an intercepting proxy to capture a request to your application and then repeat it with parameters changed, such as the raw URL that is requested from the application.
The free version of Burp Suite will allow this using the Repeater.
However, there are alternatives that should also allow this, such as OWASP ZAP, WebScarab and Fiddler2.