What are the main differences between BIND9 and Bundy? Is Bundy secure to use? I read on their website that:
The project is currently working on fixing up some loose ends in the
code inherited, clean the code, and to get the initial infrastructure
up and running to support the first Bundy release.
Is it buggy? Should I go with BIND9 or move to Bundy? I am running Debian Wheezy. If it is better than BIND9, how can I completely get rid of BIND9 and avoid conflicts with Bundy?
Bundy is a rewrite of the BIND DNS system designed for greater scalability and security than BIND9. It is more complex to configure than BIND9 and is far more modular.
A BIND9 instance normally has the following service running:
Bind9/named
Bundy has the following services:
bundy-auth — Authoritative DNS server. This process serves DNS requests.
bundy-cfgmgr — Configuration manager. This process maintains all of the configuration for Bundy.
bundy-cmdctl — Command and control service. This process allows external control of the Bundy system.
bundy-ddns — Dynamic DNS update service. This process is used to handle incoming DNS update requests to allow granted clients to update zones for which Bundy is serving as a primary server.
bundy-msgq — Message bus daemon. This process coordinates communication between all of the other Bundy processes.
bundy-resolver — Recursive name server. This process handles incoming DNS queries and provides answers from its cache or by recursively doing remote lookups. (This is an experimental proof of concept.)
bundy-sockcreator — Socket creator daemon. This process creates sockets used by network-listening Bundy processes.
bundy-stats — Statistics collection daemon. This process collects and reports statistics data.
bundy-stats-httpd — HTTP server for statistics reporting. This process reports statistics data in XML format over HTTP.
bundy-xfrin — Incoming zone transfer service. This process is used to transfer a new copy of a zone into Bundy, when acting as a secondary server.
bundy-xfrout — Outgoing zone transfer service. This process is used to handle transfer requests to send a local zone to a remote secondary server.
bundy-zonemgr — Secondary zone manager. This process keeps track of timers and other information necessary for Bundy to act as a secondary server.
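As a rough way to see this modular split on a running host, you can list the matching processes (a quick sketch, not part of the Bundy documentation; pgrep -a prints the PID and full command line):

pgrep -a named     # a BIND9 box typically shows a single named process
pgrep -a bundy     # a Bundy box shows one process per service: bundy-auth, bundy-msgq, ...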
Additionally, the DHCP server that was part of the BIND 10 suite is not included in Bundy; it is the one piece that ISC held onto.
https://ripe68.ripe.net/presentations/208-The_Decline_and_Fall_of_BIND_10.pdf
Starting in 2009, the Internet Systems Consortium (ISC) developed a new software suite, initially called BIND 10. With release 1.2.0 the project was renamed Bundy, marking the end of ISC's involvement in the project. [1]
Note: IP addresses and domain names have been changed to equivalents so as not to attract attacks!
Background
I'm setting up a standalone VPS on which I'll host half a dozen or so domains, providing both email and web hosting. I may add additional VPSs later but don't want to register a new FQDN for each new server. I plan to have a single domain name with a subdomain created for each server, for example s1.myserverdomain.com and s2.myserverdomain.com. These FQDNs will be used to provide resolvable names for common services like mail.s1.myserverdomain.com.
The first VPS will have two IP addresses, so that I can use it for providing nameserver services as ns1.s1.myserverdomain.com and ns2.s1.myserverdomain.com. Later, when I add another server, I'll split them up.
(You might tell me that running both nameservers on the same machine is bad practice, because if one goes down, so will the other; but since in that case the mail and web hosting would go down too, there doesn't seem much point in paying for another server just yet.)
What I want to end up with is GoDaddy handling the DNS for myserverdomain.com, with nameservers ns1.s1... and ns2.s1... created on my VPS; later I will move ns2.s1 to ns2.s2. I will then set the nameservers for each of the half dozen hosted domains to those nameservers.
My Configuration
So far I have created the following DNS records at Godaddy for myserverdomain.com in addition to the default records created automatically by Godaddy:
TYPE NAME VALUE
A s1 100.1.1.1
A ns1.s1 100.1.1.1
A ns2.s1 100.1.1.2
A mail.s1 100.1.1.1
A smtp.s1 100.1.1.1
There is a section on GoDaddy for setting up hosts. I don't fully understand why this exists, as I thought we just needed to create 'A' records for that. Anyway, these are the hosts I've set up in that section:
HOST IP ADDRESS
s1 100.1.1.1
ns1.s1 100.1.1.1
ns2.s1 100.1.1.1
These records were all created more than 48 hours ago, so have completed propagation.
The VPS Setup
The VPS is running Ubuntu 18.04 with ISPConfig 3.1 installed for the panel. It was set up following "The Perfect Server" tutorial for ISPConfig, which included the installation of BIND. The hostname was set to s1.myserverdomain.com from the outset.
The panel currently shows the status of BIND as being "UP".
Current Status
When I head over to mxtoolbox.com and perform a DNS check on s1.myserverdomain.com it reports "No DNS server can be found".
My Question
I need to know what I've done wrong. Are there any records I should have created? Of those I did create, are any unnecessary or wrong? Thanks!
It could be several things: maybe you have port 53 closed, maybe your NS records aren't set up correctly, and so on.
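A few quick checks you can run yourself with dig and netcat (using the placeholder names and addresses from the question; substitute your real ones):

dig NS s1.myserverdomain.com +trace      # does the parent zone delegate s1 to ns1.s1/ns2.s1?
dig @100.1.1.1 s1.myserverdomain.com A   # does the VPS itself answer DNS queries?
nc -vz 100.1.1.1 53                      # is port 53 reachable over TCP at all?

If the +trace output never reaches your own nameservers, the delegation (NS records / registered hosts) is the problem; if the direct query times out, look at BIND's configuration or the firewall on the VPS.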
You already noted that having both nameservers on the same machine is bad practice. Using a second IP address for this is pointless; I wouldn't bother. You can always point a subdomain to a different IP address later, and some resolvers will wait a long time if they can't reach you, so even if your server is down for only a minute, for some users it will appear down for much longer.
If you share your domain name, we can look it up and see what's wrong. You can also do this yourself with tools like zonemaster.net and intodns.com
Lastly, ISPConfig has a good forum on howtoforge.com/community, I recommend it!
I am considering developing on the Yocto Project for an embedded Linux project (an industrial application), and I have a few questions for those with experience with embedded Linux in general -- Yocto experience is a bonus. I just need to get an idea of what is commonly done for firmware updates.
I have a few requirements: authentication, a secure communications protocol, and some type of rollback if the update fails. Also, if there is a way to gradually release the patch across the fleet of devices, that would be interesting, as I want to avoid bricked devices in the field.
How do you deploy updates/patches to field devices today – and how long did it take to develop it? Are there any other considerations I am missing?
Although you certainly can use rpm, deb, or ipk for your upgrades, my preferred way (at least for small to reasonably sized images) is to have two images stored on flash, and to only update complete rootfs images.
Today I would probably look at meta-swupdate if I were to start working with embedded Linux using OpenEmbedded / Yocto Project.
What I've been using for myself and multiple clients is something more like this:
A container upgrade file which is a tarball consisting of another tarball (hereafter called the upgrade file), the md5sum of the upgrade file, and often a gpg-signature.
An updater script stored in the running image. This script is responsible for unpacking the outer container of the upgrade file, verifying the correctness of the upgrade file using the md5sum, and often verifying a cryptographic signature (normally GPG-based). If the update file passes these tests, the updater script looks for an upgrade script inside the update file and executes it.
The upgrade script inside the update file performs the actual upgrade: it normally rewrites the non-running image, extracts and rewrites the kernel, and, if these steps are successful, instructs the bootloader to use the newly written kernel and image instead of the currently running system.
The benefit of having the script that performs the actual upgrade inside the upgrade file is that you can do whatever you need in the future in a single step. I've made special upgrade images that upgrade the firmware of attached modems, or that extract some extra diagnostic information instead of performing an actual upgrade. This flexibility will pay off in the future.
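A minimal sketch of such an updater script, assuming the container holds update.tar.gz, its md5sum file, and a GPG signature, and that the script inside is called upgrade.sh (all of these names are illustrative):

#!/bin/sh
set -e
CONTAINER="$1"
WORK=$(mktemp -d)
tar -xf "$CONTAINER" -C "$WORK"                  # unpack the outer container
cd "$WORK"
md5sum -c update.tar.gz.md5                      # integrity check
gpg --verify update.tar.gz.sig update.tar.gz     # origin check (keyring provisioned at build time)
tar -xzf update.tar.gz                           # unpack the actual upgrade file
exec ./upgrade.sh                                # hand control to the script shipped inside the update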
To make the system even more reliable, the bootloader uses a feature called bootcount, which counts the number of boot attempts; if this number exceeds a threshold, e.g. 3, the bootloader boots the other image instead (the image configured to be booted is considered faulty). This ensures that if the image is completely corrupt, the other, stored image will automatically be booted.
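With U-Boot, for example, this is roughly what the bootcount feature looks like (variable names from U-Boot's bootcount support; the alternate boot command, the health check, and the fw_env.config setup are assumptions):

# one-time U-Boot environment setup
setenv bootlimit 3                    # after 3 unsuccessful boot attempts...
setenv altbootcmd "run bootcmd_alt"   # ...run your command that boots the other image (illustrative name)
saveenv

# in Linux, once the new image has booted and passed its health checks,
# clear the counter so this image keeps being used (u-boot-tools / libubootenv)
fw_setenv bootcount 0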
The main risk with this scheme is that you upgrade to an image whose upgrade mechanism is broken. Normally, we also implement some kind of restoration mechanism in the bootloader, such that the bootloader can reflash a completely new system; though this rescue mechanism usually means that the data partition (used to store configurations, databases, etc.) will also be erased. This is partly for security (not leaking information) and partly to ensure that after the rescue operation the system state is completely known to us, which is a great benefit when the operation is performed by an inexperienced technician far away.
If you do have enough flash storage, you can do the following. Make two identical partitions, one for the live system and the other for the update. Let the system pull the updated image over a secure method and write it directly to the other partition. It can be as simple as plugging in a flash drive, with the USB socket behind a locked plate (physical security), or using ssh/scp with appropriate host and user keys. Swap the partitions with sfdisk, or edit your bootloader's settings, only if the image was downloaded and written correctly. If not, nothing happens, the old firmware lives on, and you can retry later. If you need gradual releases, let the clients decide whether to take an image based on the last byte of their MAC address, as sketched below. All this can be implemented with a few simple shell scripts in a few hours. Or a few days if you actually test it :)
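A sketch of that MAC-based gating in shell (the interface name, threshold, and the update step itself are assumptions):

#!/bin/sh
IFACE=eth0
ROLLOUT_PERCENT=25                                   # accept the update on roughly 25% of devices
LAST_BYTE=$(awk -F: '{print $6}' /sys/class/net/$IFACE/address)
BUCKET=$(( $(printf '%d' "0x$LAST_BYTE") * 100 / 255 ))
if [ "$BUCKET" -lt "$ROLLOUT_PERCENT" ]; then
    echo "in this rollout wave: fetching and writing the new image"
    # scp/curl the image here and write it to the inactive partition
else
    echo "not in this wave; check again on the next release"
fi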
@Anders' answer is complete, exhaustive, and very good. The only thing I can add is a suggestion to think about a few things:
Does your device have an internet connection, USB, or SD card on which to store a complete new rootfs? Working with embedded Linux is not like writing a 128K firmware image onto a Cortex-M3.
Is your end user capable of carrying out the update?
Is your device installed in an accessible location, or can it only be reached remotely?
As for the time needed to develop a complete, robust, stable solution: that is not a simple question, but note that it is a key part of the product and will affect how the market perceives your application, especially in the early days and months after first deployment, when it is usual to send updates to fix small and large teething bugs.
Suppose I were to set up an ubuntu machine and install some services and software on it. Further suppose I were to set up another stock ubuntu machine, this time without the additional services and software. I know there are ways of creating installation/setup scripts or taking disk images and such to build large numbers of identical machines, but if I were to programmatically take a file-based diff between the installations and migrate all file additions/changes/removals/etc from the fully configured system to the stock system, would I then have two identical, working systems (i.e. a full realization of the 'everything is a file' linux philosophy), or would the newly configured system be left in an inconsistent state because simply transferring files isn't enough? I am excluding hostname references and such in my definitions of identical and inconsistent.
I ask this because I need to create a virtual machine, install a bunch of software, and add a bunch of content to tools like redmine, and in the near future I'm going to have to mirror that onto another vm. I cannot simply take a disk image because the source I receive the second vm from does not give me that sort of access and the vm will have different specs. I also cannot go with an installation-script-based approach at this point because that would require a lot of overhead, would not account for the added user content, and I won't know everything that is going to be needed on the first vm until our environment is stable. The approach I asked about above seems to be a roundabout but reasonable way to get things done, so long as its assumptions are theoretically accurate.
Thanks.
Assuming that the two systems are largely identical in terms of hardware (that is, same network cards, video cards, etc.), simply copying the files from system A to system B is generally entirely sufficient. In fact, at my workplace we have used exactly this process as a "poor man's P2V" mechanism in a number of successful cases.
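The copy itself is typically done with rsync over ssh, excluding the pseudo-filesystems; a sketch, run as root on the source machine (the hostname "target" and the exclude list are assumptions to adjust):

rsync -aAXHv \
  --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ --exclude=/run/ \
  --exclude=/tmp/ --exclude=/mnt/ --exclude=/media/ --exclude=/lost+found \
  / root@target:/

The -aAXH flags preserve permissions, ACLs, extended attributes, and hard links, which matters if you want the result to behave identically.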
If the two systems have different hardware configurations, you may need to make appropriate adjustments on the target system to take this into account.
UUID Mounts
If you have UUID based mounts -- e.g., your /etc/fstab looks like this...
UUID=03b4f2f3-aa5a-4e16-9f1c-57820c2d7a72 /boot ext4 defaults 1 2
...then you will probably need to adjust those identifiers. A good solution is to use label-based mounts instead (and set up the appropriate labels, of course).
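For example, on an ext4 filesystem (the device name is an assumption):

e2label /dev/sda1 boot

and then in /etc/fstab:

LABEL=boot /boot ext4 defaults 1 2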
Network cards
Some distributions record the MAC address of your network card as part of the network configuration and will refuse to configure your NIC if the MAC address is different. Under RHEL-derivatives, simply removing the MAC address from the configuration will take care of this. I don't think this will be an issue under Ubuntu.
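For reference, on RHEL-style systems the MAC address typically appears as a HWADDR line in the interface configuration, e.g. (illustrative values):

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:11:22:33:44:55   # remove or update this line on the target system
BOOTPROTO=dhcp
ONBOOT=yes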
I am confused about DNS caching. I am writing a small forward proxy server and want to use the OS DNS cache on a Linux system.
If I understand correctly, there is DNS caching at the browser level. Then there is DNS caching at the OS level (Windows has it. I am not sure if Linux distros have it by default).
How does a browser or proxy server use OS DNS caching? I am trying to find out if I can rely on Linux for DNS caching instead of doing it on my own inside my proxy.
On Linux (and probably most Unixes), there is no OS-level DNS caching unless nscd is installed and running. Even then, the DNS caching feature of nscd is disabled by default at least in Debian because it's broken. The practical upshot is that your Linux system very probably does not do any OS-level DNS caching.
You could implement your own cache in your application (like they did for Squid, according to diegows's comment), but I would recommend against it. It's a lot of work, it's easy to get it wrong (nscd got it wrong!!!), it likely won't be as easily tunable as a dedicated DNS cache, and it duplicates functionality that already exists outside your application.
If an end user using your software needs to have DNS caching because the DNS query load is large enough to be a problem or the round-trip time to the external DNS server is long enough to be a problem, they can install a caching DNS server such as Unbound on the same machine as your application, configured to cache responses and forward misses to the regular DNS resolvers.
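A minimal sketch of such an Unbound setup, listening only on localhost and forwarding cache misses upstream (the forwarder addresses are examples):

/etc/unbound/unbound.conf
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
forward-zone:
    name: "."
    forward-addr: 1.1.1.1
    forward-addr: 8.8.8.8

The application then simply resolves names through 127.0.0.1 (via /etc/resolv.conf) and gets caching for free.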
Here are two other software packages which can be used for DNS caching on Linux:
dnsmasq
bind
After configuring the software for DNS forwarding and caching, you then set the system's DNS resolver to 127.0.0.1 in /etc/resolv.conf.
If your system is using NetworkManager you can either try using the dns=dnsmasq option in /etc/NetworkManager/NetworkManager.conf or you can change your connection settings to Automatic (Address Only) and then use a script in the /etc/NetworkManager/dispatcher.d directory to get the DHCP nameserver, set it as the DNS forwarding server in your DNS cache software and then trigger a configuration reload.
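For the first option, the change is just one line in the [main] section (a sketch; the section may already exist in your file):

/etc/NetworkManager/NetworkManager.conf
[main]
dns=dnsmasq

followed by a restart of NetworkManager (e.g. systemctl restart NetworkManager).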
Here is an example of DNS caching in Debian using dnsmasq: Local DNS caching, an article on ManageaCloud.
Configuration summary:
/etc/default/dnsmasq
# Ensure you add this line
DNSMASQ_OPTS="-r /etc/resolv.dnsmasq"
/etc/resolv.dnsmasq
# Your preferred servers
nameserver 1.1.1.1
nameserver 8.8.8.8
nameserver 2001:4860:4860::8888
/etc/resolv.conf
nameserver 127.0.0.1
Then just restart dnsmasq.
Benchmark test using DNS 1.1.1.1:
for i in {1..100}; do time dig slashdot.org @1.1.1.1; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'
Benchmark test using your locally caching DNS forwarder (dnsmasq):
for i in {1..100}; do time dig slashdot.org; done 2>&1 | grep ^real | sed -e s/.*m// | awk '{sum += $1} END {print sum / NR}'
Nowadays, DNS caching is implemented at the OS level by systemd-resolved:
https://fedoraproject.org/wiki/Changes/systemd-resolved#Caching
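To check whether it is active and actually caching on your system (commands from systemd; exact output varies by version):

resolvectl status        # shows whether systemd-resolved is in use and which DNS servers it talks to
resolvectl query example.com
resolvectl statistics    # cache size and hit/miss counters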
Firefox contains a DNS cache.
To disable the DNS cache:
Open your browser
Type in about:config in the address bar
Right click on the list of Properties and select New > Integer in the Context menu
Enter 'network.dnsCacheExpiration' as the preference name and 0 as the integer value
When disabled, Firefox will use the DNS cache provided by the OS.
We have set up 3 virtual machine host servers that mount the VMs from 2 other storage machines. We mount the VMs from the storage servers so there is less data to move when migrating the VMs (pause on one server, mount on the new server, unpause) and to facilitate snapshots and backups.
We were in the middle of an extended power outage due to storms (the ops team forgot to check that we had fuel in the generator, and they don't test it weekly; tsk, tsk), so we shut everything down.
After fueling the generator, we started to bring everything up. Big problem.
To NFS mount the storage, NFS wants to do a reverse DNS lookup, but the DNS server is a VM that can't start until the storage is NFS mounted!
We copied the DNS server VM to one of the VM servers locally and started it so we could then bring everything up.
We would like to run NFS without the reverse lookup (everything is on our internal network) but can't find out how to turn it off.
Any help is appreciated
Put the IP addresses of the NFS clients in the /etc/hosts file of the NFS server, with a comment like:
# 2009-04-17 Workaround a chicken and egg DNS resolution problem at boot
192.0.2.1 mynfsclient
192.0.2.2 anothernfsclient
Then, add to your runbook "When changing the IP addresses of a machine, do not forget to update the hosts file of the NFS server".
Now, as for shutting off this DNS test in the NFS server itself, that depends on the server; you did not indicate the OS or the NFS server implementation.
I had a similar problem with an old Yellow Machine NAS box - I was having DNS/DHCP fights where the reverse lookups were not matching the forward lookups.
In our case, just putting dummy entries in the NAS box's /etc/hosts for all the IPs solved the problem. I didn't even need to have correct names for the IPs - just any name for an IP stopped mountd from complaining.
(Interesting side note - at least in the older version of Linux on the NAS box, there's a typo in the NFS error message: "DNS forward lookup does't match with reverse " )
Can't you just put the IP address of the server in question in the fstab file, so that no DNS lookup is required?
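For example, a line like this in /etc/fstab mounts by IP address (the export path and mount point are made up for illustration):

192.0.2.10:/export/data   /mnt/data   nfs   defaults   0 0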
It's NFSv4; the problem is that all access requests use a reverse DNS lookup to determine the NFS domain for access/security purposes.
I think you can stop this behavior by putting a line in /etc/default/nfs containing:
NFSMAPID_DOMAIN=jrandom.dns.domain.com
This needs to match across all the systems that share or use NFS from each other. See the section about setting NFSMAPID_DOMAIN, toward the end of that page, which explains what happens when it is not set.
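For what it's worth, on Linux clients and servers the equivalent setting usually lives in /etc/idmapd.conf rather than /etc/default/nfs (same domain value as above):

/etc/idmapd.conf
[General]
Domain = jrandom.dns.domain.com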
NFSv4 - more fun than a bag of weasels.