Sorry if this is a bit of a newbie question, but I am quite new to VPS hosting and its relatively more complicated setup. I have a VPS set up, and once or twice a day the site fails to load for about 10 minutes. When it comes back online, it's fine after that. Upon logging in to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere in between? On daily.co.uk, there are two nameservers set: ns0.etc*** and ns1.etc***. I ran a tracert from the Windows command prompt; it traced all the way down to the server, with two timeouts along the route.
I also ran a check on http://dnscheck.pingdom.com/ while the site was slow, and it came back fine except for this: "Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems."
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic tool for the issue you are experiencing is dig.
dig is not available out of the box on Windows, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:\> dig <your domain here> +trace
This will show you how DNS resolution happens from your location to the requested end point. Chances are, the error you received is correct. Most DNS setups assign several name servers to your domain registration, allowing round-robining across the delegated name servers in the event that one becomes unresponsive.
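As a quicker sanity check, you can also list just the delegated name servers (using example.com as a stand-in for your real domain):
C:\> dig NS example.com +short
If only one name server comes back, that matches the warning Pingdom gave you, and adding a second nameserver at your registrar should be the first fix.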
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.
Right now, we have a site that is being flooded, and the site is very slow. Normally it works well, but someone decided to flood it today. We have other sites on the server and they are not directly affected. We have limited the amount of CPU that the flooded site may use.
We really need a solution to prevent flooding of our sites in cases like this.
I have read this page:
http://www.acunetix.com/blog/articles/8-tips-secure-iis-installation/
It talks about temporarily denying IP addresses, but it does not say for how long the IPs are denied access.
In addition, I would rather have the IPs auto-blocked in IIS, or even better in the Windows Firewall. Is this possible?
Can someone help so the site can start behaving normally again? :-)
We are using IIS version 8.5 :-)
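For what it's worth, IIS 8.0 and later ship a Dynamic IP Restrictions module that does this kind of temporary auto-blocking at the IIS level (not the Windows Firewall). A minimal sketch of enabling rate-based blocking from an elevated command prompt; the thresholds are placeholder values you would tune to your real traffic:
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/security/dynamicIpSecurity ^
  /denyByRequestRate.enabled:"True" /denyByRequestRate.maxRequests:"30" ^
  /denyByRequestRate.requestIntervalInMilliseconds:"1000" /commit:apphost
As I understand it, a block is released once the client's request rate drops back under the threshold, so the deny is temporary by design; blocking at the Windows Firewall level would require an external log-watching script.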
I am thinking about hosting my own nameservers.
Two different IPs are required for this, and generally it is expected that these will be two different machines because downtime of one's DNS nameservers is evidently "bad".
But I can't find anywhere that will actually tell me the consequences.
If I am running a number of domains on a single server that has close to 100% uptime, is it really a big deal if I run my nameserver on that same machine? (I have two+ IP addresses that point to that server.)
Can someone tell me what the worst-case failure is, apart from possibly the DNS being down for a few hours after the machine has been down?
If all your name servers are unavailable for a longer period than your zone TTL, your zone will disappear from the Internet. Until at least one name server is brought back online, the zone will not exist. Mail sent to your domain will bounce, attempts to reach your web servers will make the browser go "Nope, no such site" and so on.
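You can see how long cached answers would carry you by checking the TTLs on your records (example.com as a stand-in for your zone):
dig +noall +answer NS example.com
The second field in each answer line is the TTL in seconds; once it expires on every resolver that has you cached, and no name server is answering, the zone is effectively gone.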
Since most people have a domain because they want to use it for something, it ceasing to exist is generally regarded as a problem.
I have an Ansible playbook for spinning up and building a brand new GNU/Linux box and installing vsftpd.
I have a client which needs to send a nightly file over SFTP. I have instructed them to send it to ftp.example.com.
I need to be able to very quickly run the playbook against any infrastructure provider (such as DigitalOcean, AWS, Rackspace, etc.) and, without any change on the client end, still receive the nightly file upload even if (as will be the case) the IP of the server has changed. So, one night the server may be on a DigitalOcean box in New York, the next on an AWS box in Ireland.
Now, obviously I could use a DNS name-server provider with a good API to code against, and reset the A record as a stage of the playbook run. However, this will likely mean that until the client's DNS cache is flushed, they will still see ftp.example.com as the previous server.
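For illustration, if the zone were hosted on something like Amazon Route 53, that playbook stage could shell out to the provider's CLI; a minimal sketch, where the hosted zone ID and IP address are placeholders:
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "ftp.example.com", "Type": "A", "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'
Even with a 60-second TTL, though, the client-side caching problem just described still applies.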
So, how can I guarantee that this will work without any interaction on the part of the client?
Many thanks
In terms of DNS, you can't. Even if you set a low TTL (time to live/expiry) on the DNS records, many DNS services cache short TTLs for up to 72 hours.
My recommendation is not to have an infrastructure that requires constant changes of IP.
What may be a better solution is to use a distributed service like BitTorrentSync.
You could also host your own BIND server and ask your client to point his DNS at your BIND server, avoiding any third party DNS.
The real solution would be to stick with a persistent instance. If money is an issue, you can stop your instance while it is idle, saving on compute costs in Amazon AWS.
I have a slight problem; first, a bit of the back story. Recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it, and it has now been set up successfully and works reasonably well.
However, my main problem is DNS-based. When trying to connect to the domain, the only way Windows will recognize it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone has any ideas, or has done something similar, encountered this problem before, and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain, I am able to remove the DNS server address and just use a couple of amendments to the HOSTS file to keep it running, but this still leads to occasional issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be gratefully received.
Michael
For the HOSTS entries, the most likely issue is that there are several service records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
For the slow resolution of DNS records, there are several points where you could start looking. My first test would be whether or not you are using a forwarder for external DNS requests, and whether the forwarder has decent speed. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2 or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's public DNS on IPv4 for the example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes long, then there is nothing the UCS server can do; you'll simply have to set a different forwarder with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages from the named daemon in the syslog file. Normally these appear when software has been manually removed or the firewall configuration has been changed.
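A quick way to look (the path assumes the Debian-based layout UCS uses):
grep named /var/log/syslog | tail -n 50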
Kevin
Sponsored post, as I work for Univention North America, Inc.
Within our office, we have a local server running DNS for internal "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, only hosts under those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple of weeks without issue on one machine, then suddenly stop working, or on another it'll happen 15 times per day. It's completely random across all workstations.
When troubleshooting, I have opened a command prompt and issued various nslookup commands for some of these hosts, and they resolve. However, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc.
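That advice is correct: nslookup queries the configured DNS server directly and bypasses the Windows resolver cache. A closer approximation of what browsers and mail clients experience is to resolve through the normal stack, for example with ping (using a hypothetical internal host name):
ping -n 1 fileserver.office
If nslookup resolves a name but ping cannot, the fault lies with the workstation's DNS Client service or its cache rather than with the DNS server itself.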
The only solution thus far is manually restarting the Windows DNS Client service on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful often enough to be worth attempting before restarting the DNS Client.
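If it helps to script that workaround, restarting the service from an elevated command prompt looks like this (assuming a Windows version that still allows the DNS Client service to be stopped; newer releases protect it):
net stop dnscache && net start dnscache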
I have tried two different DNS servers, BIND9 and Windows Server 2003 R2 DNS, and the behavior is the same.
We have a single Netgear JGS524 switch that all workstations and servers in the office are connected to, and a Linksys SR224G switch in another department with workstations attached.
In this particular situation, it appears that Windows will randomly start using a secondary name server instead of the primary, even if the primary is available.
My solution: remove the secondary. This is not a great solution, as it will obviously kill name resolution entirely if the single remaining name server goes down, but given that this network is small and name resolution isn't mission critical (read: it can go down for an hour), it is acceptable.
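On the workstations, that can be done with something like the following (the interface name and server IP are placeholders):
netsh interface ip set dns name="Local Area Connection" static 192.168.1.10
which leaves only the primary name server configured.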