Finding the right IP of a website - dns

I have tried to send a DNS packet to get the IP of a website.
In some cases, like Google, the IP was right, and when I typed it into the URL bar it took me to Google.
But in other cases (for example, stackoverflow.com) it gave me an IP that didn't lead to the website.
To be sure that my packet was right, I tried nslookup on the command line, and the result was the same.
So I can't find the right IP address of the website.
This is the message that appears when I try to open Stack Overflow:
Fastly error: unknown domain: 151.101.65.69.
Please check that this domain has been added to a service.

Generally speaking, you cannot open a website just by entering its IP address in your browser's address bar. Web servers (and possibly many other network components between you and the web server) often host more than one website on a single IP address, so they rely on the exact domain name typed in the address bar to serve the right content.

I think it's caused by a restriction on your internet connection. Try contacting your ISP (your internet provider) about this problem; they will probably know more about its cause.

Short answer: you need a Host header.
Long answer: Since HTTP/1.1, introduced in 1997 (and then updated in 1999 and 2014), a request needs a Host header. That allows the web server to route the request to the corresponding server configuration, a virtual host in Apache speak. Some servers don't have this configured and allow requests for any host to be served from the same web server configuration.
HTTP/1.1 also allowed multi-tenant proxies such as Fastly to exist on the Internet. Fastly is a CDN - a content delivery network - which caches website content closer to users and delivers it locally (faster than from a cloud or a colo, hence the name).
When you don't specify the domain for the request, it looks like your client (or a library) is using the IP address as the Host header. That's why the response from Fastly complains about the domain: unknown domain: 151.101.65.69.
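A quick way to see this is to send the same request to the Fastly address twice, controlling the Host header by hand. This is a minimal sketch (assuming Python's standard library and reusing 151.101.65.69 from the error above; Fastly's addresses can change): with the bare IP as the host you get the "unknown domain" error, while naming stackoverflow.com lets the CDN route the request (over plain HTTP it will typically answer with a redirect to HTTPS rather than a page).

import http.client

FASTLY_IP = "151.101.65.69"   # taken from the error message above

def fetch(host_header):
    # Connect to the raw IP, but decide ourselves what goes into the Host header.
    conn = http.client.HTTPConnection(FASTLY_IP, 80, timeout=5)
    conn.request("GET", "/", headers={"Host": host_header})
    resp = conn.getresponse()
    print(host_header, "->", resp.status, resp.reason)
    conn.close()

fetch(FASTLY_IP)             # Host is just the IP: Fastly answers "unknown domain"
fetch("stackoverflow.com")   # Host names the real site: Fastly can route the request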
While Fastly does support pinning a service to a dedicated IP address, which would have worked for your request, it doesn't look like Stack Overflow is using that feature, as they might not need it.

Related

How to get incoming request IP address using .NET Core

I deployed my .NET Core solution in an Azure environment (PaaS). I used the following code snippet there to get the client's IP address:
dtoItem.LogIP = HttpContext.Connection.RemoteIpAddress.ToString();
I used standard .NET Core libraries and made the necessary changes in Startup.cs as well:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto,
    RequireHeaderSymmetry = false,
});
I believe I have implemented everything correctly, but I still don't get an accurate client IP address. I always get the client's public IP instead of their private IP. Since this can be duplicated (two users in the same office share the same public IP), I need the client's private IP rather than the public one.
Is it possible to get the private IP address in a PaaS solution? If it is not possible, is there a way to track the client's PC information (such as IP address or MAC address)?
Is it possible to get the private IP address in a PaaS solution?
No, it is not possible, as shared in this SO post, and this answer addresses the MAC address part.
In client-side JavaScript there is no API exposed to get the IP address (obviously for security reasons). You can get the IP address on the server side, but typically, if you are accessing the internet from your company, traffic goes through the corporate proxy and the IP address seen by the server will never be the actual client IP but the proxy server's address. So this is limited on the server side, as the server only sees the proxy (public IP address).
If it is not possible, is there a way to track the client's PC information (such as IP address or MAC address)?
What you can reliably track is the user agent. Breaking the user agent down, we get some information about the browser and OS versions. But the user agent can easily be spoofed with a browser extension.
If you are looking for browser fingerprinting or tracking, have a look at Panopticlick, which shows some more of the information that can be used to track a client, such as installed fonts, screen resolution and installed plugins. The fingerprintjs2 JavaScript library helps to track clients using 26 parameters as of today.
There is no straightforward answer to this. The thread shared by Rob has some great insights. However, one needs to understand that a lot can happen to a request before it reaches the server. Intermediary networking devices can manipulate the TCP/IP headers, so they may not reflect the IP address that you need.
From a solution perspective, this might be perfectly possible if you develop your own client and log this information somewhere so that you can track it. Otherwise, there is no reliable way to get this information.

Intercepting data on DNS server

I want to set up my own DNS server. That is, instead of using Google's public DNS server 8.8.8.8, I want to use mine, say at 195.33.65.97. I want to set this up on a CentOS server.
However, I want to add a middle layer on the server: whenever a request arrives at my DNS server, I will have control over it. For example, if it is asking for skype.com, do not process the request.
Can this be done?
This depends on the specific DNS server you are using. However, on Linux, the bind9 server is the most common one. You can intercept / handle a domain using a zone configuration. For example:
zone "skype.com" {
type master​;
file "/path/to/blocked_domains.dns";
};
In the file "/path/to/blocked_domains.dns", you configure how to handle the blocked domain (e.g. by having it resolve to the address of a server on which you host an error page).
See How to block or sinkhole domains in BIND for additional details.
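If you want to verify from a script that your BIND instance is answering for the blocked zone, a quick check with the dnspython library (an assumption here; dig or nslookup work just as well) pointed at your own server might look like this. The address printed is whatever sinkhole address your zone file defines.

import dns.resolver   # third-party: pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["195.33.65.97"]   # the address of your own DNS server

# skype.com should now resolve to the sinkhole address from the zone file,
# rather than to Skype's real infrastructure.
for record in resolver.resolve("skype.com", "A"):
    print(record.address)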
Yes, this can be done. At the very least you could write your own DNS server (it's easier than it sounds).
See Very simple DNS server
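To give an idea of what "write your own DNS server" means in practice, here is a minimal sketch in Python (standard library only; UDP only, A records only, no name compression or error handling) that answers every query with a fixed, made-up address. A middle layer like the one described in the question would parse the queried name out of the packet and decide whether to answer, refuse, or forward it.

import socket
import struct

LISTEN_ADDR = ("0.0.0.0", 5353)   # use port 53 in production (requires root)
SINKHOLE_IP = "195.33.65.97"      # hypothetical address to answer with

def build_response(query: bytes) -> bytes:
    # Reuse the client's transaction ID, flip the QR bit, report one answer.
    transaction_id = query[:2]
    flags = struct.pack(">H", 0x8180)           # standard response, recursion available
    counts = struct.pack(">HHHH", 1, 1, 0, 0)   # QDCOUNT=1, ANCOUNT=1, NSCOUNT=0, ARCOUNT=0
    # The question starts at byte 12: a name ending in 0x00, then QTYPE and QCLASS.
    question_end = query.index(b"\x00", 12) + 5
    question = query[12:question_end]
    # Answer: pointer to the name in the question (0xc00c), type A, class IN, TTL 60, 4-byte address.
    answer = b"\xc0\x0c" + struct.pack(">HHIH", 1, 1, 60, 4) + socket.inet_aton(SINKHOLE_IP)
    return transaction_id + flags + counts + question + answer

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
while True:
    query, client = sock.recvfrom(512)
    sock.sendto(build_response(query), client)

You can test it with dig -p 5353 @127.0.0.1 skype.com; every name will come back as 195.33.65.97.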

Why can't I spoof Facebook with my own DNS server?

Reading a lot about servers, load balancing and similar topics, a question came to mind.
DNS servers are servers which gives you the IP for a given domain name. Is there a "dictator" knowing all the valid DNS servers in the world? If I want to make a DNS server, and someone requests a website it doesn't have. How would it know which other DNS to redirect the request to? What if I tell facebook.com to have a spoof IP, and everyone getting the IP from my DNS server would be communicating with a spoof facebook server? Obviously, this isn't how it works (at least not at a big degree), because then someone would have done it already to attack hundreds of people.
When one registers a domain, one has to specify the name server for that domain. What happens during this process? Is a request sent to this DNS server to notify it there is a new domain to save in the database? If so, how can anyone own the top domains like .com? And why cannot I for example make my own top domain name if I can make my own DNS server?
After looking at nginx as a load balancing system, I'm starting to wonder a bit. Is it so that a request to http://www.google.com/ works like this? The computer asks a DNS server for the IP address for google.com, and then requests it? This will only be one IP, and all requests to Google ends up at this one server? And then this IP will be connected to a nginx server, or a more basic hardware unit to route the request internally to other servers? So all requests go to one server before it redirects the request to a data center?
After looking up google.com, it says the name servers are ns1.google.com etc.. But what is the point of them, if you need a different name server to get to ns1.google.com in the first place?
Obviously what I've written doesn't make sense, because if it were true, the web as a whole would be unusable because of people exploiting the possibilities for malicious causes. And I can't imagine how ONE server could handle ALL the requests thrown at google.com.
I've tried searching Google, but all I get is theoretical explanations that led me to where I am now. It would have been great if someone would point me to some articles that explain this thoroughly, and hopefully a lot of other people will find this question useful.
Anyone can run a DNS server, but the challenge is getting someone to use it. Normally the DNS server IP is provided as a DHCP option or is statically assigned. If you can get someone to use your server, you can return any IP for any hostname, including creating new top-level domains (subject to any filtering at the client, of course; web browsers might have difficulty with a new TLD, for example). Note that with DNSSEC this will eventually change, as name records will be digitally signed and your server won't be able to fake the signature.
DNS servers operate in a tree. When a server receives a request for a domain it does not control, it forwards the request to another DNS server. The other DNS server may be the one that returns the IP (this is called the authoritative server), or it may return an NS record that points to yet another server which then must be queried. The DNS root servers provide for resolving TLDs.
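To make the tree concrete, here is a small sketch (assuming the dnspython library) that asks a root server, which is not authoritative for facebook.com, and shows that what comes back is not an answer but a referral: NS records pointing at the .com servers, which a resolver would then query in turn.

import dns.message
import dns.query   # third-party: pip install dnspython

# 198.41.0.4 is a.root-servers.net, one of the DNS root servers.
query = dns.message.make_query("www.facebook.com.", "A")
response = dns.query.udp(query, "198.41.0.4", timeout=5)

print("answer section:", response.answer)   # empty: the root does not know the address
for rrset in response.authority:            # referral: NS records for the com. zone
    print(rrset)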
A DNS server does not need to always return the same IP for a given name. It may choose to return a different IP based on region, client IP, or even per-request. This is the most typical way to load balance. Multiple DNS servers can also load balance the DNS requests by using anycast routing, where many servers share the same public IP and traffic is routed to them randomly by publishing multiple routes for the same IP.
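As a small illustration of the load-balancing point, a single lookup often returns several addresses, and repeated lookups may return them in a different order or return different sets entirely; a minimal check from Python:

import socket

# Collect the distinct addresses currently returned for one name.
infos = socket.getaddrinfo("www.google.com", 443, proto=socket.IPPROTO_TCP)
print({info[4][0] for info in infos})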

Can the DNS Server have source IP?

Short Question :
Since DNS is anycast, is there any way for a DNS server to know the "first" source the DNS query originated from?
Long Question :
I've developed a custom DynDNS server using PowerDNS, and I want users to feed it information via a web interface. I want the web interface to update records for each user "based on IP".
So when the DNS server gets a request, if it could determine the source IP, it would be easy to return the records associated with that IP.
As far as I have tested, the DNS server can only know the IP of the "last" node in the DNS chain, not the source. Is there any way?
Regards
Google and Yahoo! submitted a draft (draft-vandergaast-edns-client-ip-01) to the IETF DNS Extensions Working Group that proposed a new EDNS0 option within DNS requests that recursive servers could use to indicate their own client's IP address to the upstream authoritative server.
The intent was to theoretically optimise the use of Content Delivery Networks by ensuring that the web server addresses returned were based on the end user's IP address, rather than on the address of the end user's DNS server.
The idea was not well received and wasn't accepted by the working group because it intentionally broke the caching layer of the DNS, and the draft has subsequently expired.
UPDATE - a variation on this has subsequently been published as RFC 7871.
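For completeness, this is roughly what using that option (EDNS Client Subnet, as standardised in RFC 7871) looks like from a client using the dnspython library. Whether the server honours the option is entirely up to it, and many resolvers strip or ignore it; the 203.0.113.0/24 prefix below is a documentation range used purely as an example.

import dns.edns
import dns.message
import dns.query   # third-party: pip install dnspython

# Attach an EDNS Client Subnet option claiming the query originates from 203.0.113.0/24.
ecs = dns.edns.ECSOption("203.0.113.0", 24)
query = dns.message.make_query("www.example.com.", "A")
query.use_edns(edns=0, options=[ecs])

response = dns.query.udp(query, "8.8.8.8", timeout=5)
print(response.answer)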
Perhaps you have control of the software performing the lookup? If so, you could include the IP address as part of the request, e.g.
23-34-45-56.www.example.com
to which your custom-written server replies
23-34-45-56.www.example.com 1800 CNAME www-europe.example.com
or
23-34-45-56.www.example.com 300 A 34.45.56.67
etc.
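A tiny sketch of what the client side of that scheme could look like (example.com, the dash encoding, and the custom server are just the illustrative conventions from the records above):

import socket

def encoded_name(client_ip: str, base: str = "www.example.com") -> str:
    # Turn 23.34.45.56 into 23-34-45-56.www.example.com, matching the records above.
    return client_ip.replace(".", "-") + "." + base

name = encoded_name("23.34.45.56")
print(name)
try:
    # Against the custom server described above this would return a
    # region- or client-specific address; in the public DNS it simply fails.
    print(socket.gethostbyname(name))
except socket.gaierror:
    print("name does not exist in public DNS (expected for this example)")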
If the client is a web browser, complications arise due to NAT, HTTP proxies, and the inability to query host interface addresses directly from Javascript. However, you might be able to do an AJAX-style lookup to a what's-my-ip service, which understands X-Forwarded-For.
Long answer to Short Question :
DNS is not anycast. Some content DNS server owners use anycasting to distribute servers in multiple physical locations around the world, but the DNS/UDP and DNS/TCP protocols themselves are not anycast. The notion simply doesn't exist at that protocol layer.
Short answer to Long Question :
No.
Expansion
As noted, there's nothing in the DNS protocol for this. Moreover, the relationship between front-end and back-end transactions at a caching resolving proxy DNS server is not one-to-one.
You'll have to use whatever client differentiation mechanisms exist in the actual service protocol that you're using, instead of putting your client differentiation in the name→IP address lookup mechanism. Client differentiation for other services doesn't belong in name→IP address lookup, anyway. Such lookup is common to multiple protocols, for starters. Use the mechanisms of whatever actual service protocol is being used by the clients who are communicating with your servers.

DNS-based strategies for showing a nice "Currently Offline" page when the server is down

How can I make a site automagically show a nice "Currently Offline" page when the server is down (I mean, the full server is down and the request can't even reach IIS)?
Changing the DNS manually is not an option.
Edit: I'm looking for some kind of DNS trick to redirect to another server in case the main server is down. I can make permanent changes to the DNS, but not manual changes as the server goes down.
I have used the uptime services at DNSMadeEasy with great success. In effect, they set the DNS TTL to a very low number (5 minutes) and take care of pinging your server.
In the event of outage, DNS queries get directed to the secondary IP. An excellent option for a "warm spare" in small shops with limited DNS requirements. I've used them for 3 years with not a single minute of downtime.
EDIT:
This allows for geographically redundant failover, which the NLB solution proposed does not address. If the network connection is down, both servers in a standard NLB configuration will be unreachable.
Some server needs to dish out the "currently offline" page, so if your server is completely down there has to be some other server serving the file(s). You can set up a cluster of servers (even if just two) where, while the first one is down, the second is configured only to return the "currently offline" page. Once the first server is back up, you can take down the second safely (as server 1 will take all the load).
You probably need a second server with 100% uptime and some kind of failover load balancer in front of it: if the main server is online, the balancer directs traffic to it; if it isn't, it serves a page saying the server is down.
I believe that if the server is down, there is nothing you can do.
The request will send up a 404 network error because when the web address is resolved to an IP, the IP that is being requested does not exist (because the server is down). If you can't change the DNS entry, then the client browser will continue to hit xxx.xxx.xxx.xxx and will never get a response.
If the server is up, but the website is down, you have options.
EDIT
Your edit mentions that you can make a permanent change to the IP. But you would still need a two-server setup in order to achieve what you are talking about. You can point the DNS at a load balancer, which would be able to direct the request to a server that is currently active. However, this still requires 100% uptime for the server that the DNS points to.
No matter what, if the server that the DNS is pointing to (which you must control, in order to redirect the traffic) is down, then all requests will receive a 404 network error.
EDIT Thanks to brian for pointing out my 404 error error.
Seriously, DNS is not the right answer to server load balancing or failover. Too many systems (including stub clients and ISP recursive resolvers) will cache records for much longer than the specified TTL.
If both servers are on the same network, use routing protocols to achieve fail-over by having both servers present the same IP address to the network, but where the fail-over server only takes over if it detects that the (supposedly) live server is offline.
If the servers are Unix, this is easily done by running Quagga on each server, and then using OSPF as the local routing protocol. I've personally used this for warm standby servers where the redundant system was actually in another data center, albeit one that was connected via a direct link to the main data center.
Certain DNS providers, such as AWS's Route 53, have a health-check option, which can be used to re-route to a static page. AWS has a how-to guide on setting this up.
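As a rough sketch of what that setup can look like when driven from code (using boto3 against Route 53; the hosted zone ID, domain, and addresses below are placeholders, and the same configuration can be done entirely in the AWS console):

import boto3

route53 = boto3.client("route53")

# 1. A health check that probes the primary web server over HTTP.
health_check = route53.create_health_check(
    CallerReference="primary-www-check-1",
    HealthCheckConfig={
        "Type": "HTTP",
        "IPAddress": "203.0.113.10",   # primary server (placeholder)
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# 2. A failover record pair: PRIMARY is served while the health check passes,
#    SECONDARY (the host serving the static "currently offline" page) takes over when it fails.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",     # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": health_check["HealthCheck"]["Id"],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "TTL": 60, "ResourceRecords": [{"Value": "198.51.100.20"}],
        }},
    ]},
)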
I'm thinking that if the site is load balanced, the load balancer itself would detect that the web servers it's trying to direct clients to are down, and would therefore send the user to a backup server with a message describing the technical problems.
Other than that.....
The only thing I can think of is to control the calling page. Obviously that won't work in all circumstances... but if you know that most of your hits to this server will come from a particular source, you could add a JavaScript test to that source and redirect to a "server down" page that is generated on a different server.
But if you are trying to handle all hits, from all sources (some of which you can't control), then I think you are out of luck. As other folks are saying - when a server is down, the browser gets a 404 error when it attempts a connection.
... perhaps there would be a way, at a point in between, to detect 404 errors being returned by servers and replace them with a "server is down" web page. You'd need something like an HTML firewall or some other intermediate network gear between the server and the web client.

Resources