Is a DNS SRV record lookup secure?

I'm wondering how trustworthy the data from an SRV record lookup is. I have a program that could essentially fall apart if someone were able to spoof the SRV response.
If not, are there any precautions that could be taken to make it trustworthy?

The only reliable defence against spoofing seems to be doing the lookup through a secure DNS service. Secure (encrypted) DNS lookup is currently offered by many DNS providers, e.g. Cloudflare.

All DNS is completely insecure unless you're specifically using a secure DNS server with an encrypted protocol like DNSCrypt.
Even this may be insecure unless the server you're querying is the authoritative server for the requested resource. If it has to go off and ask another server, the link to the next server may or may not be secure.
Without encryption, everything can be modified and/or intercepted by an attacker like your ISP or anybody else along the way.
ISPs frequently intercept DNS queries in order to be "helpful", although they could just as easily be evil.
So the short answer to your question is "no". SRV lookups aren't secure and no other DNS queries are either.
If your application queries a DNS server you control, over a secure link, it should be fine. If you're just using whatever DNS your ISP provides, probably not.
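As a rough illustration of that advice, you can ask a validating resolver directly and check whether it reports DNSSEC validation, or do the lookup over an encrypted channel. The service name below is just a placeholder, and the zone must actually be DNSSEC-signed for the "ad" flag to mean anything:

# query a trusted validating resolver and look for the "ad" (authenticated data)
# flag in the reply header; you still have to trust the path to that resolver
dig @1.1.1.1 _xmpp-server._tcp.example.com SRV +dnssec

# or do the same lookup over DNS-over-HTTPS, so nothing on the path can tamper
# with the answer between you and the resolver
curl -s -H 'accept: application/dns-json' \
  'https://cloudflare-dns.com/dns-query?name=_xmpp-server._tcp.example.com&type=SRV'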

Related

How to redirect HTTPS to insecure WS?

I may not be doing something right, security wise, so please do inform me.
User visits the site (and is forced to use HTTPS), with a valid certificate. They connect via WSS to a "middleman" server. Third-party organisations expose a normal WS server to the internet and broadcast its public address/port to the "middleman" server, which in turn broadcasts it to the user.
So, the user, on this HTTPS site, gets a url such as ws://203.48.15.199:3422. Is there any way I can possibly allow this sort of connection to happen?
One such way is to allow the "middleman" to also be a proxy - every third-party server address is assigned a path on the WSS-enabled middleman server. Users would connect to wss://example.com/somepath and I would simply proxy it back to the third-party, insecure websocket. The downside is that I must maintain that proxy, which defeats the purpose of allowing third parties to even run their own server.
Any ideas?
Is there any way I can possibly allow this sort of connection to happen?
This is a form of active mixed content. All recent browsers, such as Firefox and Google Chrome, prohibit this kind of content, much like they prohibit a secure (HTTPS-loaded) page from loading insecure JavaScript.
The reasoning is fairly straightforward: it undermines the purpose of HTTPS in the first place.
The "correct" way to fix this is to force all 3rd parties to HTTPS (WSS) on their websockets as well and avoid the whole issue.
One such way is to allow the "middleman" to also be a proxy
Yes. You could set up a proxy server for the 3rd parties you want to allow proxying for. There are numerous examples of doing this with nginx scattered around the internet. A simple example might be:
# untested configuration
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/cert.pem;
    ssl_certificate_key /etc/ssl/private/cert.key;

    location /3rdpartyproxy {
        proxy_pass http://203.48.15.199:3422;
        proxy_http_version 1.1;
        proxy_set_header Upgrade websocket;
        proxy_set_header Connection upgrade;
    }
}
Something like this may work, but keep in mind it is not in the best interest of your users. The connection between your proxy and the 3rd party is still insecure and vulnerable to all the same attacks that regular HTTP is.
Another thing to watch out for is to make sure your proxy can't be misused by other parties. I've seen some people set up a proxy with a dynamic path and proxy that traffic to wherever the path pointed. Something like:
ws://myproxy.example/203.48.15.199/3422
Then configure the web server to proxy it to whatever was in the URL. This seems like a nice solution because it'll just do the right thing depending on the URL. The major downside is that, unless you have some kind of authentication on the proxy, anyone can use it to proxy traffic anywhere.
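A safer pattern, sketched here under the same assumptions as the untested configuration above (the partner paths and the second endpoint are made up), is to enumerate each vetted third party explicitly instead of deriving the upstream from the URL:

# untested sketch: one explicit location per vetted third party, so the URL
# cannot be used to relay traffic to arbitrary hosts
location /partner-a {
    proxy_pass http://203.48.15.199:3422;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location /partner-b {
    proxy_pass http://203.0.113.11:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}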
To summarize:
You cannot allow ws: on https: pages.
The best solution is to move the burden onto the 3rd parties. Have them set up proper HTTPS. This is the most beneficial because it fully serves the purpose of HTTPS.
You can proxy it. The caveat is that the connection is still insecure from your proxy to the 3rd party, and you now need to manage a proxy server. Nginx makes this fairly easy.
If you go the proxy route, make sure it's configured in a manner that cannot be abused.
If I require SSL, does this mean they won't be able to just publish an IP to connect to?
Getting an HTTPS certificate requires a domain name. To be more specific, getting a certificate for an IP address is possible, but it is very out of the ordinary and far more effort than getting one for a domain name.
How much more of a burden is it?
That's mostly a matter of opinion, but it will mean picking a domain, registering it, and paying to maintain the registration with a registrar. The cost varies, but a domain can usually be had for less than $15 a year if there is nothing special about the domain.
Can the process of exposing HTTPS be entirely automatic?
Getting an HTTPS certificate is mostly automatable now. Let's Encrypt offers HTTPS certificates for free (really free, no strings attached) and is now a wildly popular Certificate Authority. Their approach is a bit different: they issue short-lived (90-day) certificates, but provide software called Certbot that automates the renewal and installation of the certificate. Since the process is entirely automated, the certificate's short validity period doesn't really matter; it all just gets renewed in the background, automatically.
Some web servers take this a step further. Apache now supports ACME (the protocol that Let's Encrypt uses) natively, and Caddy is another web server that takes HTTPS configuration to the absolute extreme of simplicity.
For your 3rd parties, you might consider giving them example configurations with HTTPS properly set up for various web servers.
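For example, on a host that already serves the site with nginx, a typical Certbot run looks something like this (the domain is a placeholder, and package names and plugin availability vary by distribution):

# obtain and install a certificate, then confirm unattended renewal works
sudo certbot --nginx -d partner-a.example.net
sudo certbot renew --dry-run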
I would love just a little more description of the "certs for IP addresses" possibility, as I feel you glossed over it.
Certificates for IP addresses are possible but exceedingly rare for publicly accessible websites. https://1.1.1.1 is a recent example.
Ultimately, a CA has to validate that the Relying Party (the individual or organization requesting a certificate) can demonstrate control of the IP address. Section 3.2.2.5 of the CA/B Requirements describes how this is done, but practically it is left up to the CA to validate ownership of the IP address. As such, "Domain Validated" certificates are not available for IP addresses; "Organization Validated" (OV) certificates are required at a minimum. OV certificate issuance is more stringent and cannot be automated: in addition to proving ownership of the IP address, you will need to provide documentation of the Relying Party's identity, such as a driver's license for an individual or tax documentation for an organization.
Let's Encrypt does not issue certificates for IP addresses, so you would likely have to find a different CA that will issue them, and likely pay for them. Registering a Domain Name for the IP address will likely come out to be more cost effective.
I'm not aware of any CA that will issue certificates for an IP address in an automated fashion like Let's Encrypt / CertBot.
Using a domain name affords more flexibility. If an IP address needs to be changed, then it is a simple matter of updating the A/AAAA records in DNS. If the certificate is for the IP address, then a new certificate will need to be issued.
There have been historical compatibility issues with IP addresses in certificates for some software and browsers.
An idea, but you still need your server to behave as a reverse proxy: instead of example.com/somepath, implement a thirdparty.example.com virtual host for each of your third parties. You then need a wildcard certificate for *.example.com, but this prevents people from probing example.com/somepathdoesntexist.
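As a rough nginx illustration of that idea (the hostnames, certificate paths and upstream address are placeholders):

# untested sketch: one virtual host per third party under a *.example.com certificate
server {
    listen 443 ssl;
    server_name partner-a.example.com;
    ssl_certificate     /etc/ssl/certs/wildcard.example.com.pem;
    ssl_certificate_key /etc/ssl/private/wildcard.example.com.key;

    location / {
        proxy_pass http://203.48.15.199:3422;   # that partner's plain ws:// endpoint
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}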

Why can't I spoof Facebook with my own DNS server?

Reading a lot about servers, load balancing and similar topics, a question came to mind.
DNS servers are servers which give you the IP for a given domain name. Is there a "dictator" that knows all the valid DNS servers in the world? If I want to make a DNS server, and someone requests a website it doesn't have, how would it know which other DNS server to redirect the request to? What if I tell it facebook.com has a spoofed IP, so that everyone getting the IP from my DNS server would be communicating with a spoofed Facebook server? Obviously this isn't how it works (at least not on a large scale), because then someone would have done it already to attack hundreds of people.
When one registers a domain, one has to specify the name server for that domain. What happens during this process? Is a request sent to this DNS server to notify it there is a new domain to save in the database? If so, how can anyone own the top-level domains like .com? And why can't I, for example, make my own top-level domain if I can make my own DNS server?
After looking at nginx as a load balancing system, I'm starting to wonder a bit. Does a request to http://www.google.com/ work like this? The computer asks a DNS server for the IP address for google.com, and then requests it? That will only be one IP, and all requests to Google end up at this one server? And then this IP is connected to an nginx server, or a more basic hardware unit, to route the request internally to other servers? So all requests go to one server before it redirects the request to a data center?
After looking up google.com, it says the name servers are ns1.google.com etc. But what is the point of them, if you need a different name server to get to ns1.google.com in the first place?
Obviously what I've written doesn't make sense, because if it were true, the web as a whole would be unusable because of people exploiting the possibilities for malicious causes. And I can't imagine how ONE server could handle ALL the requests thrown at google.com.
I've tried searching Google, but all I get is theoretical explanations that led me to where I am now. It would have been great if someone would point me to some articles that explain this thoroughly, and hopefully a lot of other people will find this question useful.
Anyone can run a DNS server, but the challenge is getting someone to use it. Normally the DNS server IP is provided as a DHCP option or is statically assigned. If you can get someone to use your server, you can return any IP for any hostname, including creating new top-level domains (subject to any filtering at the client, of course; web browsers might have difficulty with a new TLD, for example). Note that with DNSSEC this will eventually change, as name records will be digitally signed and your server won't be able to forge the signatures.
DNS servers operate in a tree. When one server receives a request for a domain it does not control, it forwards the request on to another DNS server. The other DNS server may be the one which returns the IP (this is called the authoritative server), or it may return an NS record which points to another server that must then be queried. The DNS root servers provide for resolving TLDs.
A DNS server does not need to always return the same IP for a given name. It may choose to return a different IP based on region, client IP, or even per request. This is the most typical way to load balance. Multiple DNS servers can also load balance the DNS requests themselves by using anycast routing, where many servers share the same public IP and traffic is routed to whichever instance is nearest in routing terms, because multiple routes are advertised for the same IP.
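You can watch the delegation tree described above from the command line; for example, this follows the referrals from the root servers down to Google's own name servers (output varies over time and by location):

dig +trace www.google.com A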

Can the DNS Server have source IP?

Short Question :
Since DNS is anycast, is there any way for a DNS server to know the "first" source the DNS query originated from?
Long Question :
I've developed a custom DynDNS server using PowerDNS, and I want users to feed it information via a web interface. I want the web interface to update records for each user "based on IP".
So when the DNS server gets requests, if it could determine the source IP, it'd be easy to return records associated with that IP.
As far as I have tested, the DNS server can only see the IP of the "last" node in the DNS chain, not the original source. Is there any way?
Regards
Google and Yahoo! submitted a draft (draft-vandergaast-edns-client-ip-01) to the IETF DNS Extensions Working Group that proposed a new EDNS0 option within DNS requests that recursive servers could use to indicate their own client's IP address to the upstream authoritative server.
The intent was to theoretically optimise the use of Content Delivery Networks by ensuring that the web server addresses returned were based on the end user's IP address, rather than on the address of the end user's DNS server.
The idea was not well received and wasn't accepted by the working group because it intentionally broke the caching layer of the DNS, and the draft has subsequently expired.
UPDATE - a variation on this has subsequently been published as RFC 7871.
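If the resolvers involved support RFC 7871, the option can be exercised from the command line; the subnet below is a documentation prefix used purely as an example, and the authoritative server is free to ignore the hint:

# ask a public resolver to forward an EDNS Client Subnet hint with the query
dig @8.8.8.8 www.example.com A +subnet=203.0.113.0/24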
Perhaps you have control of the software performing the lookup? If so, you could include the IP address as part of the request, e.g.
23-34-45-56.www.example.com
to which your custom-written server replies
23-34-45-56.www.example.com 1800 CNAME www-europe.example.com
or
23-34-45-56.www.example.com 300 A 34.45.56.67
etc.
If the client is a web browser, complications arise due to NAT, HTTP proxies, and the inability to query host interface addresses directly from Javascript. However, you might be able to do an AJAX-style lookup to a what's-my-ip service, which understands X-Forwarded-For.
Long answer to Short Question :
DNS is not anycast. Some content DNS server owners use anycasting to distribute servers in multiple physical locations around the world, but the DNS/UDP and DNS/TCP protocols themselves are not anycast. The notion simply doesn't exist at that protocol layer.
Short answer to Long Question :
No.
Expansion
As noted, there's nothing in the DNS protocol for this. Moreover, the relationship between front-end and back-end transactions at a caching resolving proxy DNS server is not one-to-one.
You'll have to use whatever client differentiation mechanisms exist in the actual service protocol that you're using, instead of putting your client differentiation in the name→IP address lookup mechanism. Client differentiation for other services doesn't belong in name→IP address lookup, anyway. Such lookup is common to multiple protocols, for starters. Use the mechanisms of whatever actual service protocol is being used by the clients who are communicating with your servers.

How secure is IP address filtering?

I'm planning to deploy an internal app that has sensitive data. I suggested that we put it on a machine that isn't exposed to the general internet, just our internal network. The I.T. department rejected this suggestion, saying it's not worth it to set aside a whole machine for one application. (The app has its own domain in case that's relevant, but I was told they can't block requests based on the URL.)
Inside the app I programmed it to only respect requests if they come from an internal I.P. address, otherwise it just shows a page saying "you can't look at this." Our internal addresses all have a distinct pattern, so I'm checking the request I.P. against a regex.
But I'm nervous about this strategy. It feels kind of janky to me. Is this reasonably secure?
IP filtering is better than nothing, but it's got two problems:
IP addresses can be spoofed.
If an internal machine is compromised (that includes a client workstation, e.g. via installation of a Trojan), then the attacker can use that as a jump host or proxy to attack your system.
If this is really sensitive data, it doesn't necessarily need a dedicated machine (though that is best practice), but you should at least authenticate your users somehow, and don't run less sensitive (and more easily attacked) apps on the same machine.
And if it is truly sensitive, get a security professional in to review what you're doing.
edit: incidentally, if you can, ditch the regex and use something like tcpwrappers or the firewalling features within the OS if it has any. Or, if you can have a different IP address for your app, use the firewall to block external access. (And if you have no firewall then you may as well give up and email your data to the attackers :-)
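As a hedged example of pushing that check down to the OS firewall (Linux iptables shown; the port and the internal range are assumptions you would adapt):

# let the internal range reach the app's port, drop everyone else
iptables -A INPUT -p tcp --dport 443 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP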
I would rather go with SSL and some certificates, or a simple username / password protection instead of IP filtering.
It depends exactly HOW secure you really need it to be.
I am assuming your server is externally hosted and not connected via a VPN. Therefore, you are checking that the requesting addresses for your HTTPS (you are using HTTPS, aren't you??) site are within your own organisation's networks.
Using a regex to match IP addresses sounds iffy, can't you just use a network/netmask like everyone else?
How secure does it really need to be? IP address spoofing is not easy: spoofed packets cannot be used to establish an HTTPS connection unless the attacker can also manipulate upstream routers so that the return packets are redirected to them.
If you need it to be really secure, just get your IT department to install a VPN and route over private IP address space. Set up your IP address restrictions for those private addresses. IP address restrictions where the routing is via a host-based VPN are still secure even if someone compromises an upstream default gateway.
If your application is checking the IP address, then it is extremely vulnerable. At that point you don't have any protection at the router, which is where IP filtering really needs to be. Your application is probably checking HTTP header information for the sending IP address, and this is extremely easy to spoof. If you lock the IP address down at the router, that is a different story and will buy you some real security about who can access the site from where.
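As a hypothetical illustration of how cheap header spoofing is, any client can simply assert a forwarded address (the hostname here is made up):

curl -H 'X-Forwarded-For: 10.0.0.5' https://app.internal.example.com/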
If all you are doing is accessing the application internally, then SSL won't buy you much, unless you are trying to secure the information from parties internal to the organization or you require client certificates. This is assuming you won't ever access the site from an external connection (VPNs don't count, because you are tunneling into the internal network and are technically part of it at that point). It won't hurt either and isn't that hard to set up; just don't think that it is going to be the solution to all your problems.
IP whitelisting is, as others have mentioned, vulnerable to IP spoofing and man-in-the-middle attacks. For a MITM, consider that some switch or router has been compromised and will see the "replies"; it can either monitor them or even alter them.
Consider also vulnerabilities in SSL encryption. Depending on the attacker's effort, this can be foiled in a MITM as well, alongside the well-known gaffes such as the reuse of primes, etc.
Depending on the sensitivity of your data, I would not settle for SSL, but would go with strongSwan or OpenVPN for more security. If handled properly, these will be a lot less vulnerable to a MITM.
Reliance on whitelisting alone (even with SSL) I would consider "low grade", but may be sufficient for your needs. Just be keenly aware of the implications and do not fall into the trap of a "false sense of security".
If it's limited by IP address, then although they can spoof the IP address, they won't be able to get the reply. Of course, if it's exposed to the internet, it can still get hit by attacks other than against the app.
Just because all your internal IPs match a given regex, that doesn't mean that all IPs that match a given regex are internal. So, your regex is a point of possible security failure.
I don't know what technology you used to build your site, but if it's Windows/ASP.net, you can check the permissions of the requesting machine based on its Windows credentials when the request is made.
Like all security, it's useless on its own. If you do have to put it on a public-facing webserver, use IP whitelisting, with basic username/password auth, with SSL, with a decent monitoring setup, with an up-to-date server application.
That said, what is the point in having the server be publicly accessible and then restricting it to only internal IP addresses? It basically reinvents what NAT and an internal-only server give you for free, and you have to worry about web-server exploits and the like.
You don't seem to gain anything by having it externally accessible, and there are many benefits to having it be internal-only.
My first thought on the resource issue would be to ask if it wouldn't be possible to work some magic with a virtual machine?
Other than that, if the IP addresses you check against are either IPs you KNOW belong to computers that are supposed to access the application, or are in the local IP range, then I cannot see how it could not be secure enough (I am actually using a similar approach at the moment on a project, although it is not incredibly important that the site is kept "hidden").
A proper firewall can protect against IP spoofing, and it's not as easy as say spoofing your caller ID, so the argument /not/ to use IP filtering because of the spoofing danger is a bit antiquated. Security is best applied in layers, so you're not relying on only one mechanism. That's why we have WAF systems, username+password, layer 3 firewalls, layer 7 firewalls, encryption, MFA, SIEM and a host of other security measures, each which adds protection (with increasing cost).
If this is a web application you're talking about (which was not clear from your question), the solution is fairly simple without the cost of advanced security systems. Whether using IIS, Apache, etc. you have the ability to restrict connections to your app to a specific target URL as well as source IP address - no changes to your app necessary - on a per-application basis. Preventing IP-based web browsing of your app, coupled with IP source restrictions, should give you significant protection against casual browsing/attacks. If this is not a web app, you'll have to be more specific so people know whether OS-based security (as has been proposed by others) is your only option or not.
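For instance, with nginx (Apache and IIS have equivalent directives) the restriction can live entirely in the server configuration; the path, subnet and upstream address below are assumptions:

# untested sketch: only the internal range may reach this application
location /internal-app/ {
    allow 10.0.0.0/8;
    deny  all;
    proxy_pass http://127.0.0.1:8080;
}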
Your security is only as strong as your weakest link. In the grand scheme of things, spoofing an IP is child's play. Use SSL and require client certs.
It became useful, first, to distinguish among different kinds of IP VPN based on the administrative relationships interconnecting the nodes, not the technology. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service.
Maybe this will help? I've been looking for the same answer, and found this Stack Overflow question as well as this idea from the Red Hat Enterprise Linux documentation. I'll try it soon. I hope it helps.
iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP
Where 192.168.1.0/24 is the LAN range you want to protect. The idea is to block devices on the internet-facing interface (hence the FORWARD chain) from spoofing source addresses in the local IP network.
Ref: http://www.centos.org/docs/4/html/rhel-sg-en-4/s1-firewall-ipt-rule.html

Must all registered domains have domain name servers assigned to them?

If I just want to know whether a domain name is reserved, is it sufficient to use this command and see if any domain name servers turn up, in which case it's reserved?
host -t NS example.com
It's a lot faster than visiting http://www.internic.net/whois.html and typing example.com to get much more detailed results, which I'm not interested in anyway.
Absolutely not.
A past employer registered theirname.biz solely for use on the internal network: it had DNS entries on the inward-facing network DNS server, but nowhere on the internet.
I'm not sure the trick was particularly essential, but "imap.theirname.biz" has the advantage over just "imap" that it's unambiguous if you're connected simultaneously to multiple networks (in the absence of deliberate foul play, of course), so you can just use all of their internal DNS resolvers. It also has the advantage over "imap.theirname.com" that, once you know the convention, it's immediately obvious that it's a private server, and hence that the reason you can't connect to it is that you forgot to connect to the VPN. There may have been other benefits to which I was not privy: I'm a coder, not an IT tech...
Various TLDs have differing requirements for whether name servers are provisioned or not. For example ".de" does require that name servers are up and running and correctly configured before they'll allow the domain registration to proceed.
The technical standards for DNS don't require it though, in fact there's nothing in the core DNS specifications to link together the registration of a name with its subsequent operation in the DNS.
Therefore, using whois is probably the most reliable method, with the caveat that you'll need a whois client that's clever enough to figure out which server to talk to for the domain in question.
That said, checking for the appropriate NS record is a very good shortcut to confirm that a domain is registered; you just can't use the absence of such a record to prove that it isn't!
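In practice that means combining the two checks, for example (the exact "not found" wording in whois output varies by registry):

# fast positive check: if NS records exist, the name is certainly registered
host -t NS example.com
# authoritative check: fall back to whois when no NS records are published
whois example.com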
NS records are not necessarily required for registered domains. The whois service is your most reliable option.
Note that most Unix systems and Mac OS X have a "whois" command line program that is really quick to use:
whois stackoverflow.com
I don't believe that you have to have name servers pointing to your domain. And even if you did, there is no assurance that the box acting as the DNS server isn't down.
