I see something like this in my nodejs app... Am I under a hacking attack?
GET http://httpheader.net 301 5.464 ms - 108
GET http://www.httpheader.net/ 200 6.820 ms - -
Thank you very much
The link from httpheader.net's support page says:
This is usually an indication that either the IP you are on had a proxy server at one time, your IP is being probed to see if it contains a proxy server, or someone simply has their software misconfigured. These can usually be ignored as they pose no direct threat.
Also, the default API routing on your server side is not configured very well. It should return 404 when a page is not found on your server; instead, you are returning 200.
You can block all of these requests if you want to.
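For example, here is a minimal Express sketch of both ideas, assuming a standard Express 4 app (the port and routes are placeholders): unknown paths fall through to a 404 handler, and proxy-style probes, whose request line carries a full URL such as http://httpheader.net/, are rejected outright.

    const express = require('express');
    const app = express();

    // Proxy probes send an absolute URL as the request target, so req.url
    // starts with "http://" or "https://" instead of a normal path like "/".
    app.use((req, res, next) => {
      if (/^https?:\/\//i.test(req.url)) {
        return res.status(400).end();
      }
      next();
    });

    // ...your real routes go here...
    app.get('/', (req, res) => res.send('ok'));

    // Catch-all: anything that did not match a route above gets a 404, not a 200.
    app.use((req, res) => res.status(404).send('Not Found'));

    app.listen(3000);

With something like this in place, the probes in the log above would show up as 400/404 instead of 200 or 301.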
I need some help digging deeper into why IIS is behaving in a certain way. Edge/Chrome makes an HTTP/2.0 request to IIS, using the IPv6 address in the Host header (https://[ipv6]/), which results in the server generating a 302 response. The ISAPI filter makes some modifications to the 302 response and replaces the response buffer. IIS drops the request/response and logs the following in the HTTPERR log:
<date> <time> fe80::a993:90bf:89ff:1a54%2 1106 fe80::bdbf:5254:27d2:33d8%2 443 HTTP/2.0 GET <url> 1 - 1 Connection_Dropped_List_Full <pool>
I suspect this is related to HTTP/2.0: when putting Fiddler in the middle, the connection isn't HTTP/2.0 anymore, it downgrades to HTTP/1.1, and it works.
When using an IPv4 address, it works. In either case the filter goes through the identical steps. There is no indication in the filter that anything went wrong.
Failed Request Tracing will not write buffers for incomplete/dropped requests that appear in HTTPERR log.
Is there a place where I can find out more detailed information about why IIS is dropping the request?
I did a network capture, and it looks like the browser is initiating the FIN teardown of the session.
Do you use any load balancer or reverse proxy before requests reach IIS? The Connection_Dropped_List_Full error indicates that the log cannot store any more dropped connections, so the underlying problem is that your connections are being dropped.
If you use a load balancer, the web application may be under heavy load, and because of this no threads are available to provide logging data to HTTP.sys. Check this.
Or, the client closed the request before IIS responded, but IIS still sent a response. This is more likely to be a problem with the application itself than with IIS and HTTP.sys. Check this.
One thing I noticed is that if you change HTTP/2 to HTTP/1.1, it works well. The main difference between HTTP/1.1 and HTTP/2 is performance.
HTTP/1.1 practically allows only one outstanding request per TCP connection (HTTP pipelining allows more than one outstanding request, but it still doesn't solve the problem completely).
HTTP/2.0 allows using the same TCP connection for multiple parallel requests.
So it looks like when you use HTTP/2, one connection carries multiple requests, and the application cannot handle those requests well, especially the requests for images.
Another thing: Failed Request Tracing can capture all requests and responses, including those with status codes 200 and 302.
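If you want to confirm that HTTP/2 is the trigger without Fiddler in the middle, one option is to temporarily disable HTTP/2 in HTTP.sys via the registry values Microsoft documents for Windows Server 2016 and later (this is a test step, not a fix, and it requires a reboot):

    reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Tls /t REG_DWORD /d 0 /f
    reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v EnableHttp2Cleartext /t REG_DWORD /d 0 /f

If the drops stop with HTTP/2 off, that narrows the problem to how the filter's replaced response buffer interacts with the HTTP/2 code path.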
In our project we're using two servers: one as a PROD API server and one as a proxy (nginx is used for that).
The proxy server uses HTTP/2 as well. In one scenario the proxy may get a response from the PROD API server, replace the PROD links with the proxy's, and then return that to the client.
In that case we can hit the "net::ERR_SPDY_PROTOCOL_ERROR 200" error. I googled a little about the issue, but it looks like there may be a few different causes for that error.
In my case it occurs only when we replace hosts (modify the response from PROD before sending it to the client).
Can someone describe what "net::ERR_SPDY_PROTOCOL_ERROR 200" actually means, and maybe best practices to avoid it?
HTTP/2 is derived from the earlier SPDY protocol, that's probably why the error message doesn't mention HTTP/2 at all.
One of the reasons why you may see the ERR_SPDY_PROTOCOL_ERROR message is an invalid HTTP header coming from the server. Perhaps your proxy is making some change to an HTTP response header which is making it invalid/malformed?
Try to disable HTTP/2 on your proxy server and see if the error goes away. If it does, inspect the response headers and make sure they are valid. I suspect your proxy server is malforming the response.
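As an illustration only (the host names are placeholders, and this assumes the link rewriting is done with nginx's ngx_http_sub_module), the two usual suspects look like this in the proxy config:

    server {
        # Drop "http2" here temporarily to test whether the error is HTTP/2-specific.
        listen 443 ssl;            # was: listen 443 ssl http2;
        ssl_certificate     /etc/nginx/certs/proxy.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/certs/proxy.key;

        location / {
            proxy_pass https://prod-api.example.com;

            # sub_filter cannot rewrite compressed bodies, so ask the upstream
            # for an uncompressed response; a mismatch between the rewritten
            # body and its headers is a classic cause of ERR_SPDY_PROTOCOL_ERROR.
            proxy_set_header Accept-Encoding "";

            sub_filter       'prod-api.example.com' 'proxy.example.com';
            sub_filter_once  off;
            sub_filter_types text/html application/json;
        }
    }

If the plain HTTP/1.1 version works, compare the raw response headers of the two setups before re-enabling http2.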
We hit a similar issue today when running the reverse proxy using the Docker image nginx:1.16.0-alpine. After changing to nginx:1.16.0, the issue was solved.
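For reference, that change amounts to swapping the base image tag mentioned above, e.g. in the Dockerfile (or the image: line of a docker-compose file):

    # was: FROM nginx:1.16.0-alpine
    FROM nginx:1.16.0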
I have tried to send a DNS packet to get the IP of a website.
In some cases, like Google, the IP was right, and when I typed it into the URL bar it sent me to Google.
But in other cases (for example, stackoverflow.com) it gave me an IP that didn't lead to the website.
To be sure that my packet was right, I tried nslookup on the command line, and the result was the same.
So I can't find the right IP address of the website.
Here is the message that appears when I try to open Stack Overflow:
Fastly error: unknown domain: 151.101.65.69.
Please check that this domain has been added to a service.
You (generally speaking) cannot open a website just by entering the IP address in your browser's address bar, because web servers (and possibly many other network components between you and the web server) often host more than one website on that IP address, so they rely on the exact domain name typed in the address bar to serve the right content.
I think it's caused by your Internet restrictions. Try contacting your ISP (your Internet provider) about this problem; they will probably know more about its cause.
Short answer: you need a host header.
Long answer: Since HTTP/1.1 was introduced in 1997 (and then updated in 1999 and 2014), requests need a Host header. That allows the web server to route a request to the corresponding server configuration, a virtual server in Apache terms. Some servers don't have this configured and allow requests for any host to be served from the same web server configuration.
HTTP/1.1 also allowed multi-tenant proxies, such as Fastly, to exist on the Internet. Fastly is a CDN (content delivery network) that caches website content closer to users and delivers it locally (faster than from a cloud or a colo, hence the name).
When you don't specify the domain for the request, it looks like your client (or library) uses the IP address as the Host header. That's why the response from Fastly complains about an unknown domain: 151.101.65.69.
While Fastly does support pinning a service to a dedicated IP address, which would have worked for your request, it doesn't look like Stack Overflow is using that feature, as they might not need it.
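To illustrate the Host header point, here is a minimal Node.js sketch (the IP is the one from the error message above, and stackoverflow.com is used as the example domain) that connects to the IP directly but still tells the server, via the Host header and TLS SNI, which site it wants:

    const https = require('https');

    const req = https.request({
      host: '151.101.65.69',                   // connect to the IP directly
      path: '/',
      headers: { Host: 'stackoverflow.com' },  // which site we want on that IP
      servername: 'stackoverflow.com',         // SNI, so TLS presents the right certificate
    }, (res) => {
      console.log(res.statusCode, res.headers.server);
    });

    req.on('error', console.error);
    req.end();

Without the Host header (and SNI), the CDN has no way to tell which of its many tenants you mean, which is exactly what the Fastly error says.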
I've been running a website on an Ubuntu EC2 instance that serves a useful tool that I use on the go (just displaying changing data). Mostly my goal here is to learn server security. The server runs a NodeJS server with Express, kept alive with ForeverJS.
Over the past two weeks I've seen some typical weak attempts at "hacking", if you could call it that. For example, requests thrown at "/wp-admin" and "/administrator/manifests/libraries/joomla.xml". Recently, though, I've been getting requests that look like this:
GET http://robercid.es/ 200 1.019 ms - 10669
GET http://api.ipify.org/ 200 0.668 ms - 10669
It doesn't look like they go through, but I'm curious as to how this is accomplished, and also what the "hacker" is trying to accomplish.
Also, as for security, I think I've covered everything (SSH keys, a non-default SSH port, all ports except 80 closed), but is there anything specific to route handling that I should do security-wise? Any non-valid page gets a 404, inputs are sanitized, and the DB is limited to the local network.
It's probably not much to worry about. IP addresses in EC2 are part of a big pool and get used by different customers. Sometimes those IP addresses end up hardcoded into applications or are returned by cached DNS lookups that don't respect the TTL.
If it becomes a problem, like getting an IP address that was previously used for a high-volume API endpoint, then you can simply stop/start your instance to get a new IP. Or just request a new Elastic IP and assign it to your instance.
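If you go the Elastic IP route, a sketch with the AWS CLI (the instance ID here is a placeholder) looks like this:

    aws ec2 allocate-address --domain vpc
    # note the AllocationId from the output, then attach it:
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

The Elastic IP stays with your account until you release it, so later stop/starts won't change the address again.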
I'm running a Node.js webserver on Azure using the Express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn cloudflare proxying off.
I've been attempting to confirm the TCP connection using
tcpdump -i eth0 port 443 | grep cloudflare
(the requests come over HTTPS) and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be, and are, quite quick when they succeed, so I have a hard time believing the issue is due to a long-running process stalling the response.
I do not believe we have any sort of IP-based throttling or firewall (at least not intentionally?).
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use Cloudflare you need to switch DNS resolution to Cloudflare's DNS servers. See more information about configuring a custom domain name here: https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the Cloudflare FAQ "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
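As a rough illustration only (yourapp and example.com are placeholders, and the exact records depend on how the app was bound in Azure), the Cloudflare DNS entry for an Azure web app is usually a CNAME pointing at the *.azurewebsites.net host:

    Type     Name    Content                      Proxy status
    CNAME    www     yourapp.azurewebsites.net    DNS only while testing, then Proxied

Toggling the record to "DNS only" temporarily is a quick way to confirm whether the 524s occur only when Cloudflare's proxy is in the path.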
Try clearing your cookies.
I had a similar issue when I changed Cloudflare settings to a new host, but the Cloudflare cookies for the domain were doing something funky to the request (I am guessing they might have been trying to contact the old host?).