I am a user of OpenDNS, and I am noticing network traffic to weird subdomains on my local area network. Suppose the "Local Domain Name" setting on my router is set to "mynetwork". I am seeing many requests to domains like:
lb._dns-sd._udp.mynetwork
db._dns-sd._udp.mynetwork
b._dns-sd._udp.mynetwork
tvovhvumfcuvo.mynetwork
pqwakwyids.mynetwork
vbqulcywazgwao.mynetwork
wjyuspdzzbac.mynetwork
etc.
If this is not normal traffic, how should I discern where my problem lies? Should I install something like "Little Snitch" on my Macs, for example?
You may want to check out this answer from menandmice, where they say:
These are queries generated by Multicast/Unicast DNS Service Discovery, or 'Zeroconf', which is a service of Apple's 'Bonjour/Rendezvous' or Unix services like 'Avahi'. DNS queries coming from port 5353 are DNS queries from a Zeroconf service. DNS Service Discovery enabled clients are looking for pointers to services running in their network block 192.0.2.0/24.
This is harmless. If there is no PTR record for the requested owner names, it only means that unicast Zeroconf is not configured.
"unicast Zeroconf is not configured" might not be your exact problem, but overall it's nothing to worry about.
I could deploy BOSH and Small Footprint Tanzu Application Service (TAS) in Azure without using the domains. All VMs are running. Can I access the Cloud Controller API (CAPI) and Apps Manager with the IP address instead of api.SYSTEMDOMAIN?
The short answer is no. You really, really want to have DNS set up properly.
Here's the longer, more nuanced answer.
All requests to your foundation go through the Gorouter. Gorouter takes the incoming request, looks at the Host header, and uses that to determine where to send the request. This works the same way for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation sets the Host header based on the host name you type into its address bar. The cf CLI does the same thing.
There are some ways to work around this:
If you are strictly using a client like curl, you can set the Host header to arbitrary values. That way, you could set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
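As a sketch, with 203.0.113.10 standing in as a placeholder for your foundation's load balancer IP, hitting CAPI's unauthenticated info endpoint would look like:

    curl -H "Host: api.system_domain" http://203.0.113.10/v2/info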
You can manually set entries in your /etc/hosts file (or its equivalent on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters.
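For illustration, assuming your load balancer sits at 203.0.113.10 (a placeholder) and your system and apps domains are system.example.com and apps.example.com, the entries might look like:

    203.0.113.10  api.system.example.com
    203.0.113.10  uaa.system.example.com
    203.0.113.10  login.system.example.com
    203.0.113.10  my-super-cool-app.apps.example.com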
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
Where this won't work is on systems where you can't edit /etc/hosts: for customers or external users of software running on your foundation, or for apps deployed on your foundation that talk to each other using CF routes (because you can't edit /etc/hosts inside the container). For example, if you have app-a.apps_domain and app-b.apps_domain and app-a needs to talk to app-b, that won't work, because there is no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you are able to use container-to-container networking and the apps.internal domain, though; resolution for that domain is provided by BOSH DNS. You have to be aware of this difference when deploying your apps: map routes on the apps.internal domain and set a network policy to allow traffic to flow between the two apps, as sketched below.
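As a rough sketch (the app names are from the example above, and the exact flags vary between cf CLI versions):

    # Map an internal route to the destination app
    cf map-route app-b apps.internal --hostname app-b
    # Allow direct traffic from app-a to app-b on the app's port
    cf add-network-policy app-a app-b --protocol tcp --port 8080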
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.
The easiest way to achieve a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was something enterprise IT was still oblivious to.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS for any IP address. Say your LAN IP address is 10.0.0.1. Using xip.io,

10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1

...and so on. You can use these domains to access virtual hosts on your development web server from devices on your local network, like iPads, iPhones, and other computers. No configuration required!
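You can verify the wildcard resolution yourself with dig (assuming the service is reachable from your network):

    $ dig +short mysite.10.0.0.1.xip.io
    10.0.0.1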
I have zero experience with web programming, and the project I am working on doesn't have anything to do with it, but I hit a small roadblock and need to solve a small port problem.
We want to build a cluster of GPU machines on Azure for some deep learning calculations, install some applications on them, and let our scientists use the apps' GUIs to launch and monitor their jobs. The problem is that an app (call it A) runs on, for example, port 5050, but our firewall doesn't let us communicate on unusual ports. This would be easy to fix from the Azure side, but our IT team won't let us modify our security policies.
That's why I need to find a hacky and fast solution to overcome this. I don't want to spend my whole internship doing something complicated for it, just something that does the job.
What I thought about was to have some kind of server running on the machines (say machine A has public IP address ipbA and private IP ipvA) such that when we type "http://ipbA/app1" in our browser, the server on A fetches the page "http://ipvA:5050" (or "http://ipbA/app2" -> "http://ipvA:5051") and displays it. But does this work if the page needs to be interactive? We would like to launch jobs through it.
I have no clue how to do this, so if you could just tell me what I should look into, google, and read about, or if there is an easier way to handle it, that would be awesome. (Maybe some VPN stuff, which I'd rather avoid since we're moving towards a hybrid cloud architecture and I don't think we'd want to VPN into all the different cloud platforms.)
Two common solutions for your problem:
Set up a reverse proxy on a standard port (such as 80, or 443 if you want some SSL certificate headaches).
All your domain names will point to the reverse proxy (single IP) but the reverse proxy will forward the traffic transparently to the real servers on their special ports.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
For the technical details, in short: you keep the configuration for each domain or subdomain, and where it should be forwarded, in one or more files.
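For instance, with Apache (mod_proxy and mod_proxy_http enabled; the host names and port are placeholders matching the chain of events below), a minimal sketch might be:

    <VirtualHost *:80>
        ServerName interface-1.company.com
        # Forward everything to the internal app listening on port 5050
        ProxyPass        "/" "http://realserver.company.com:5050/"
        ProxyPassReverse "/" "http://realserver.company.com:5050/"
    </VirtualHost>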
Chain of events:
User types http://interface-1.company.com
Browser resolves interface-1.company.com (DNS: IP of the reverse proxy)
Browser connects to the reverse proxy (port 80)
Reverse proxy reads its configuration, which says where to forward
Reverse proxy forwards the request to realserver.company.com:5050
Real server sends its response to the reverse proxy
Reverse proxy relays the response to the browser
I think that is what you are trying to achieve.
Set up a VPN service that is reachable through your company's proxy, and provide VPN clients to the end users. OpenVPN clients can use an HTTPS proxy connection (your company proxy) to establish a connection to a remote VPN.
Once connected to the VPN, everyone uses the VPN's IP address and firewall policy, and is therefore no longer restricted by the company's firewall policy. Any kind of traffic can also be forwarded. This is harder to set up, and your security team might not accept it. However, it's a fully functional solution, and it can also offer additional security features if implemented properly.
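For reference, the client-side OpenVPN directives for going through a corporate proxy look roughly like this (the proxy host and port are placeholders):

    # Tunnel over TCP, since an HTTP proxy cannot relay UDP
    proto tcp-client
    # Connect to the VPN server through the corporate proxy
    http-proxy proxy.company.com 3128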
I do not recommend going this way, because of all the paperwork it would involve.
I'm having the following dilemma: I have a website on IIS with two internal IPs, and each of those IPs is NATed to a different external IP (each from a different ISP). I also configured round-robin DNS (two A records with the same name but different IPs). Basically, this balances traffic between the two ISPs, which is what we want. The thing is, apparently this configuration (DNS round robin) is meant for a cluster of servers, where each server has its own ISP on its own NIC, so traffic from the web server to the client goes out over that ISP.
Right now we are being told that no matter where our inbound traffic comes from, the outbound traffic always goes through our main WAN. That is also OK, because we have tested that when the primary WAN link is down, the website keeps working on the secondary link.
OK, the question is: do you think there may be a problem with this configuration? Is DNS round robin also useful in this configuration?
Thanks a lot for your feedback.
Normally, when you host a web service, the responses are much bigger than the inbound traffic (you typically receive an HTTP GET and deliver the whole content back), so it would make much more sense to balance the outbound traffic over your ISPs to get value out of your additional bandwidth.
Does it make sense? Yes: you can lose one ISP and your site is still available (assuming your DNS server does health checks to determine whether the sites are reachable before it hands out an IP address; if you always hand out both IPs even when one ISP is down, it won't help you at all).
It would be better to add an additional server, OR to do policy-based routing on your single server, sending each response out of the interface on which the request was received (sketched below).
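On a Linux gateway, that is a few iproute2 rules; since your server runs IIS on Windows, treat this purely as a conceptual sketch (the addresses, gateways, and interface names are placeholders):

    # Replies sourced from ISP A's address leave via ISP A's gateway
    ip rule add from 198.51.100.10 table 100
    ip route add default via 198.51.100.1 dev eth0 table 100

    # Replies sourced from ISP B's address leave via ISP B's gateway
    ip rule add from 203.0.113.10 table 200
    ip route add default via 203.0.113.1 dev eth1 table 200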
Hope that helps!
I have a couple EC2 instances behind an Elastic Load Balancer. These instances serve HTTP requests for a single web site. I recently started looking at the HOST header of the traffic, because I am planning to split my app into virtual hosts.
With some regularity (dozens of times a day), I log a request for a host name that is totally unrelated to my servers. As a couple examples, today I saw requests with the host names ad.adserverplus.com and r1---sn-upfn-hp5e.c.youtube.com. I looked these up and the IP addresses are not the same as any of my servers, nor of the ELB, so I am trying to develop a theory as to how this happens.
I realize that someone could be spoofing the host header, but it happens often enough that I am pretty sure this is not what is going on. My other idea is that somehow there is stale DNS data that just happens to resolve one of those hosts to my IP address, but again this seems like it could happen once in a great while but not regularly. What are some other possibilities, and how might I verify / discredit them?
EDIT
I looked at some of the unexpected host names today, and it seems that they actually do resolve to an IP that is one of the possible IPs my domain apex resolves to. I use Route 53 for DNS, and I have the zone apex pointed to the ELB, so when I query the IP address for my domain, I get different answers depending on when I ask. So this makes me very curious: how do these IP addresses get assigned to me, and how does EC2 make sure they are not co-opting an IP address that someone else is already using?
There are any number of reasons for this. First, you should understand that the public host names of your EC2 instances and load balancers have likely been used before. If you have an Elastic IP associated with your load balancer, it has also probably been used before.
As such, you can get traffic to your servers that is intended for a previous tenant of the hostname or IP address that you are currently using.
One thing you can do is configure your web servers to reject requests (respond with 403) that do not arrive with the proper hostname, or that come from a specific external host.
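With Apache, for example, a catch-all default virtual host can do this as a sketch; anything whose Host header is not claimed by a later virtual host lands in the first one (example.com is a placeholder for your real site):

    # First-listed vhost is the default: it catches unexpected Host headers
    <VirtualHost *:80>
        ServerName catchall.invalid
        # Respond 403 to everything
        <Location "/">
            Require all denied
        </Location>
    </VirtualHost>

    # Your real site, matched only when the Host header is correct
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot "/var/www/example"
    </VirtualHost>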
Your IP or your ELB's IP may at one point in time have been an open proxy, meaning that someone is hoping you will forward their requests on to their intended destination.
But in general, open port 80 to the internet and all kinds of bots and zombies will visit you with a pretty constant flow of dodgy requests. I would imagine, though, that the EC2 IP ranges are a particularly juicy range to search for poorly patched websites to exploit.
I'm writing a small DNS proxy. It listens for incoming UDP messages on a port and resolves them using a specified DNS (e.g. google's DNS 8.8.8.8) and sends the response back to the client.
I would like to be able to detect the default DNS server a machine uses. Every OS has an option to obtain the DNS server address automatically, and I was wondering how this is done. Is there a protocol on top of UDP or TCP, or something else entirely?
I'm using C#, but the language isn't important.
Finding which DNS server the current computer uses by default is highly dependent on both which OS you use and which language you use. If you use Java or .NET, or another platform-independent language, you might not need to worry about the OS bit, though.
Client computers usually "auto-discover" which DNS server to use via the DHCP response from the DHCP server: when they receive their IP address, they also learn which DNS server to use. They might also get addresses of WINS servers and a multitude of custom options.
You can find the DNS server by typing ipconfig /all at a command prompt (on Windows). This will give you the address of your DNS server.
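Since the question mentions C#, here is a minimal sketch using .NET's System.Net.NetworkInformation, which exposes the DNS servers configured on each interface without any OS-specific calls:

    using System;
    using System.Net.NetworkInformation;

    class ShowDnsServers
    {
        static void Main()
        {
            // Print the DNS servers configured on every interface that is up.
            foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
            {
                if (nic.OperationalStatus != OperationalStatus.Up)
                    continue;

                foreach (var dns in nic.GetIPProperties().DnsAddresses)
                {
                    Console.WriteLine("{0}: {1}", nic.Name, dns);
                }
            }
        }
    }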