We have set up three virtual machine host servers that mount the VMs from two other storage machines. We mount the VMs from the storage servers so there is less data to move when migrating a VM (pause on one server, mount on the new server, unpause) and to facilitate snapshots and backups.
We were in the middle of an extended power outage due to storms (the ops team forgot to check that we had fuel in the generator, and they don't test it weekly, tsk tsk), so we shut everything down.
After fueling the generator, we started to bring everything up. Big problem.
To NFS mount the storage, NFS wants to do a reverse DNS lookup, but the DNS server is a VM that can't start until the storage is NFS mounted!
We copied the DNS server VM to one of the VM servers locally and started it so we could then bring everything up.
We would like to run NFS without the reverse lookup (everything is on our internal network) but can't find out how to turn it off.
Any help is appreciated.
Put the IP addresses of the NFS clients in the /etc/hosts file of the NFS server, with a comment like:
# 2009-04-17 Workaround a chicken and egg DNS resolution problem at boot
192.0.2.1 mynfsclient
192.0.2.2 anothernfsclient
Then, add to your runbook "When changing the IP addresses of a machine, do not forget to update the hosts file of the NFS server".
Now, how to shut off this stupid DNS test depends on the NFS server; you did not indicate the OS or the server type.
I had a similar problem with an old Yellow Machine NAS box - I was having DNS/DHCP fights where the reverse lookups were not matching the forward lookups.
In our case, just putting dummy entries in the NAS box's /etc/hosts for all the IPs solved the problem. I didn't even need correct names for the IPs; any name for an IP stopped mountd from complaining.
(Interesting side note - at least in the older version of Linux on the NAS box, there's a typo in the NFS error message: "DNS forward lookup does't match with reverse " )
Can't you just put the IP address of the server in question in the fstab file? Then no DNS lookup should be required.
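A minimal sketch of such an fstab entry (the IP address, export path, and mount point here are placeholders, not from the question):
192.0.2.10:/export/vms  /mnt/vms  nfs  rw,hard  0  0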
It's NFSv4. The problem is that all the requests for access use a reverse DNS lookup to determine the NFS domain for access/security purposes.
I think you can stop this behavior by putting a line in /etc/default/nfs containing:
NFSMAPID_DOMAIN=jrandom.dns.domain.com
This needs to match across all the systems that are sharing/using NFS from each other. See the section about setting NFSMAPID_DOMAIN, toward the end of the page, which explains what happens when it's not set.
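For comparison, on Linux systems the same idmap domain is typically set in /etc/idmapd.conf; a sketch, assuming the standard nfs-utils idmapper:
[General]
Domain = jrandom.dns.domain.com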
NFSv4 - more fun than a bag of weasels.
Note: IP addresses and domain names have been changed to equivalents so as not to attract attacks!
Background
I'm setting up a standalone VPS on which I'll host half a dozen or so domains, catering for both email and web hosting. I may add additional VPSs later but don't want to register a new FQDN for each new server. I plan to have a single domain name with a subdomain created for each server, for example s1.myserverdomain.com and s2.myserverdomain.com. These FQDNs will be used to provide resolvable names for common services like mail.s1.myserverdomain.com.
The first VPS will have two IP addresses, so that I can use it for providing nameserver services as ns1.s1.myserverdomain.com and ns2.s1.myserverdomain.com. Later, when I add another server, I'll split them up.
(You might tell me that this is bad practice to run both nameservers on the same machine, because in the event that one goes down, so will the other, but considering that in that instance, so too will the mail and web hosting, there doesn't seem much point paying for another server just yet.)
What I want to finish up with is GoDaddy handling the DNS for myserverdomain.com, with the nameservers ns1.s1... and ns2.s1... running on my VPS; later I will transfer ns2.s1 to ns2.s2. I will set the nameservers for each of the half dozen hosted domains to use those nameservers.
My Configuration
So far I have created the following DNS records at Godaddy for myserverdomain.com in addition to the default records created automatically by Godaddy:
TYPE NAME VALUE
A s1 100.1.1.1
A ns1.s1 100.1.1.1
A ns2.s1 100.1.1.2
A mail.s1 100.1.1.1
A smtp.s1 100.1.1.1
There is a section on GoDaddy for setting up hosts. I don't fully understand why this exists, as I thought we just needed to create 'A' records for that. Anyway, these are the hosts I've set up in that section:
HOST IP ADDRESS
s1 100.1.1.1
ns1.s1 100.1.1.1
ns2.s1 100.1.1.1
These records were all created more than 48 hours ago, so propagation should be complete.
The VPS Setup
The VPS is running Ubuntu 18.04 with ISPConfig 3.1 installed for the panel. It was set up following "The Perfect Server" tutorial for ISPConfig, which included the installation of BIND. The hostname was set to s1.myserverdomain.com from the outset.
The panel currently shows the status of BIND as being "UP".
Current Status
When I head over to mxtoolbox.com and perform a DNS check on s1.myserverdomain.com it reports "No DNS server can be found".
My Question
I need to know what I've done wrong. Are there any records I should have created? Of those I did create, are any unnecessary or wrong? Thanks!
It could be several things: maybe you have port 53 closed, maybe your NS records aren't set up correctly, and so on.
You already noted that having both nameservers on the same machine is bad practice. Using a second IP address for this is useless; I wouldn't bother. Others can point subdomains to a different IP address, and some resolvers will back off for a long time if they can't reach you, so even if your server is down for only a minute, for some users it will appear down for much longer.
If you share your real domain name, we can look it up and see what's wrong. You can also do this yourself with tools like zonemaster.net and intodns.com.
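For example, a quick check from any machine with dig (using the placeholder names and IPs from the question):
dig NS s1.myserverdomain.com +trace     # follow the delegation down from the root
dig @100.1.1.1 s1.myserverdomain.com A  # does BIND answer directly on port 53?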
Lastly, ISPConfig has a good forum at howtoforge.com/community; I recommend it!
I work on Windows 10.
I have made a Google Cloud Linux Compute Engine instance with a 230 GB standard persistent disk, 1 GPU (Tesla K80), 13 GB of memory, and 2 vCPUs.
I have installed Jupyter Notebook and all the deep learning frameworks, and I am able to use them perfectly.
But I don't know how to access the deep learning data that is on my computer from the Jupyter Notebook running on my Compute Engine instance.
Can anybody tell me how to use the boot disk and what exactly its use is?
How do I access data from my laptop?
I looked into the following links but couldn't understand the terminology.
https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting
https://cloud.google.com/compute/docs/disks/mount-ram-disks
To clarify the terminology:
Persistent disk: it works the same way as adding a hard disk to your machine. If you add one more, you have to mount it somewhere inside your filesystem (e.g. /media/data). You can find the directory-creation and mount commands in the documentation you mentioned (from step 5 down).
RAM disk: it treats the extra space as memory (e.g. for high-performance computing). This is not considered storage; it counts as a tmpfs that doesn't keep data permanently. You may use it if your task requires a greater amount of RAM.
(Disclaimer: I have never used either kind of extra disk myself.)
In case you cannot find your data in Jupyter: it depends on the directory in which you start the notebook server. For example, if you start Jupyter Notebook in your home directory, you will only see files under the home directory. If you have a mounted drive, one way to reach it is to make a symlink into your working directory.
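A minimal sketch of the mount-plus-symlink approach (the device name and paths are assumptions; check lsblk for your actual device):
sudo mkdir -p /media/data
sudo mount /dev/sdb1 /media/data   # the extra persistent disk
ln -s /media/data ~/data           # now reachable from Jupyter's file browser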
P.S. You can also use software like WinSCP to access the whole filesystem, rather than going through Jupyter alone.
Make sure to set an ingress firewall rule to allow traffic to the GCE instance.
In the console, go to:
networking
VPC network
External IPs
Reserve a static IP address.
Then go to:
VPC network
firewall rules
Create a rule with a target tag, allowing protocol tcp:9999 from the source IP range 0.0.0.0/0.
When you create your instance, associate it with both the static IP address and the firewall rule's tag.
Here you can find more detailed instructions on how to create firewall rules on a GCP project: https://cloud.google.com/vpc/docs/using-firewalls
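If you prefer the CLI, the same rule can be sketched with gcloud (the rule and tag names here are made up; the port follows the console steps above):
gcloud compute firewall-rules create allow-jupyter \
    --allow=tcp:9999 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=jupyter
Then attach the jupyter tag to the instance when you create it.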
I have a slight problem; first, a bit of back story. Recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it. It's now been set up successfully and works reasonably well.
However, my main problem is DNS-based. When trying to connect to the domain, the only way Windows will recognize it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone had any ideas, or has done something similar, encountered this problem before, and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain, I am able to remove the DNS server address and keep things running with a couple of amendments to the HOSTS file, but this still leads to occasional issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be gratefully received.
Michael
For the HOSTS entries, the most likely issue is that there are several service (SRV) records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
For the slow resolution of DNS records, there are several places where you could start looking. My first test would be whether or not you are using a forwarder for external DNS requests and whether or not the forwarder has decent speed. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2, or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's resolver on IPv4 for the example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check it, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes a long time, then there is nothing the UCS server can do; you'll simply have to choose a different forwarder and set it with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages from the named daemon in the syslog. Normally these appear when software has been manually removed or the firewall configuration has changed.
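For example (the log path assumes a Debian-based UCS system):
grep named /var/log/syslog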
Kevin
Sponsored post, as I work for Univention North America, Inc.
I am confused about DNS caching. I am writing a small forward proxy server and want to use the OS DNS cache on a Linux system.
If I understand correctly, there is DNS caching at the browser level. Then there is DNS caching at the OS level (Windows has it. I am not sure if Linux distros have it by default).
How does a browser or proxy server use OS DNS caching? I am trying to find out if I can rely on Linux for DNS caching instead of doing it on my own inside my proxy.
On Linux (and probably most Unixes), there is no OS-level DNS caching unless nscd is installed and running. Even then, the DNS caching feature of nscd is disabled by default at least in Debian because it's broken. The practical upshot is that your Linux system very probably does not do any OS-level DNS caching.
You could implement your own cache in your application (like they did for Squid, according to diegows's comment), but I would recommend against it. It's a lot of work, it's easy to get it wrong (nscd got it wrong!!!), it likely won't be as easily tunable as a dedicated DNS cache, and it duplicates functionality that already exists outside your application.
If an end user using your software needs to have DNS caching because the DNS query load is large enough to be a problem or the round-trip time to the external DNS server is long enough to be a problem, they can install a caching DNS server such as Unbound on the same machine as your application, configured to cache responses and forward misses to the regular DNS resolvers.
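A minimal sketch of such an Unbound setup in unbound.conf (the forwarder addresses are just examples; caching is on by default):
server:
    interface: 127.0.0.1
forward-zone:
    name: "."
    forward-addr: 1.1.1.1
    forward-addr: 8.8.8.8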
Here are two other software packages which can be used for DNS caching on Linux:
dnsmasq
bind
After configuring the software for DNS forwarding and caching, you then set the system's DNS resolver to 127.0.0.1 in /etc/resolv.conf.
If your system is using NetworkManager, you can either try the dns=dnsmasq option in /etc/NetworkManager/NetworkManager.conf, or change your connection settings to Automatic (Address Only) and then use a script in the /etc/NetworkManager/dispatcher.d directory to fetch the DHCP nameserver, set it as the forwarding server in your DNS cache software, and trigger a configuration reload.
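The first option is a one-line change; a sketch of the relevant section of /etc/NetworkManager/NetworkManager.conf:
[main]
dns=dnsmasq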
Here you have an example of DNS caching in Debian using dnsmasq: Local DNS caching, article on ManageaCloud.
Configuration summary:
/etc/default/dnsmasq
# Ensure you add this line
DNSMASQ_OPTS="-r /etc/resolv.dnsmasq"
/etc/resolv.dnsmasq
# Your preferred servers
nameserver 1.1.1.1
nameserver 8.8.8.8
nameserver 2001:4860:4860::8888
/etc/resolv.conf
nameserver 127.0.0.1
Then just restart dnsmasq.
Benchmark test using DNS 1.1.1.1:
for i in {1..100}; do time dig slashdot.org @1.1.1.1; done 2>&1 | grep ^real | sed -e 's/.*m//' | awk '{sum += $1} END {print sum / NR}'
Benchmark test using your locally caching DNS forwarder (dnsmasq):
for i in {1..100}; do time dig slashdot.org; done 2>&1 | grep ^real | sed -e 's/.*m//' | awk '{sum += $1} END {print sum / NR}'
Nowadays, DNS caching is implemented at the OS level by systemd-resolved on many distributions:
https://fedoraproject.org/wiki/Changes/systemd-resolved#Caching
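If systemd-resolved is active, you can inspect its cache statistics with:
resolvectl statistics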
Firefox contains its own DNS cache.
To disable the DNS cache:
Open your browser
Type in about:config in the address bar
Right-click on the list of preferences and select New > Integer in the context menu
Enter 'network.dnsCacheExpiration' as the preference name and 0 as the integer value
When disabled, Firefox will use the DNS cache provided by the OS.
Currently my network setup is as follows:
1 server, 3 ethernet cards.
eth0 - ISP1
eth1 - ISP2
eth2 - local network.
What would be the proper way of configuring primary and secondary DNS?
Using tinydns.
Current configuration:
2 tinydns services running on the same machine, each configured on a different IP (NS1 = eth0, NS2 = eth1).
Each DNS configuration contains both sets of records:
NS1:
.domain.lv:10.10.10.10:ns.domain.lv
.domain.lv:20.20.20.20:ns2.domain.lv
#domain.lv:10.10.10.10:mail.didzis.lv:10:256::
#domain.lv:20.20.20.20:mail.domain.lv:20:256::
+www.domain.lv:10.10.10.10
NS2:
.domain.lv:20.20.20.20:ns.domain.lv
.domain.lv:10.10.10.10:ns2.domain.lv
#domain.lv:10.10.10.10:mail.didzis.lv:10:256::
#domain.lv:20.20.20.20:mail.domain.lv:20:256::
+www.domain.lv:20.20.20.20
The second link is more like a backup in case the first one fails, and vice versa. Won't this configuration fail if eth1 is down and www resolves to 20.20.20.20?
Thanks!
This kind of configuration can work but there will be issues. What you want to do is make the TTL of the "www.domain.lv." record really low. The TTL tells other DNS servers how long they are allowed to cache the response. The lower you make it, the quicker clients will notice when one of your ISPs is down, but making it lower will also make it so they have to recheck the IP address more often, which will cost time. 300 seconds (5 minutes) might be a reasonable compromise but I would suggest making it longer (like 900 seconds) if you can afford for a failover to take 15 minutes.
By the way, I've never used tinydns myself (and frankly I find its syntax quite cryptic and scary, if your transcripts of the zone files are anything to go by), so check its documentation for the exact TTL syntax.
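That said, the tinydns-data format does appear to accept an optional TTL field right after the IP; a hedged sketch (verify against the djbdns documentation):
+www.domain.lv:10.10.10.10:300
This would publish the www record with a 300-second TTL.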
This will all work fine when both ISPs are up.
The major drawback of this solution is that, when one of the ISPs is down, there will be DNS resolution delays no matter what. Lucky clients will try to query the nameserver that's still responding and get back the IP address that works for an answer. Unlucky clients will try to query the nameserver that's down first. This won't work. They will eventually fail over to the one that's still up and get a working IP address, but you must be prepared for a delay of (maybe) several seconds before this happens.