elijahkurk
(elijahkurk)
1
So let’s take a small office with a single domain controller serving DNS. Their DC goes down (updates, maintenance, act of god, whatever), and now the office is without a DNS server until either the DC comes back up or an alternate DNS server is brought up (or you specify DNS statically). The solution would appear to be having an alternate DNS server in the scope, but it has to be identical to the primary or there are going to be issues (can’t use Google or L3 unless you want weird intermittent issues). Is there a reason why there isn’t some sort of failover for DNS resolution for small domains?
@Microsoft @Google
6 Spice ups
legoman
(LegoMan)
2
Well, no matter how small the site, I would argue there should always be a backup domain controller, and since all DCs will run an AD-integrated DNS server, now you have backup DNS too.
…that is the “failover mechanism” by design: a secondary domain controller. There is nothing else that I know of.
Sure, you could replicate your only AD-integrated DNS server to another server running just the DNS role - but why? At that point, make it a DC and enjoy having a backup of your entire AD too.
Or just do your only AD server’s maintenance after hours, when nobody will miss the fact that DNS is down.
3 Spice ups
elijahkurk
(elijahkurk)
3
Still have some clients that are on Server 2003 and “considering” upgrading their stuff, let alone implementing redundancy for their network. I guess the easy answer is, you reap what you sow. I was just wondering what technical limitations prevent some sort of “secondary” DNS - not really a forwarder like on the server, but one that only resolves when the primary DNS server has been unresponsive for a set amount of time, something longer than a reboot interval.
legoman
(LegoMan)
4
You can’t use the Primary and Secondary DNS fields (set directly on a client workstation, or via DHCP) for fault-tolerant DNS this way. Windows will randomly switch between Primary, Secondary, Tertiary, etc., and if it switches to the Secondary DNS (which, let’s pretend, is your ISP or Google DNS), then internal name resolution will not work and client workstations may have problems logging on (very slow logon after CTRL+ALT+DEL).
For this reason, all DNS servers set on (or provided via DHCP to) client workstations must be AD-integrated DNS servers.
The Forwarders tab in the Microsoft DNS server is for something different: if the local DNS server does not host the zone being queried (your internal AD fully-qualified domain name), it will ask the forwarders (out on the Internet) to resolve the name. Ideally, only external names (yahoo.com, cnn.com, microsoft.com, etc.) escape your network; an internal name should never be tried out on the Internet.
4 Spice ups
elijahkurk
(elijahkurk)
5
Yeah, I understand that secondary dns doesn’t work like that as it’s currently implemented, but what prevents there being a failover system rather than simple redundancy?
bobmccoy
(bobmccoy)
6
Just the way you ask this question reveals a fundamental lack of understanding of how DNS works. Any server can be a secondary DNS server doing zone transfers from the DC as the primary. It so happens that in AD-integrated DNS, replication is controlled by AD and is generally set so every DC is a DNS server. However, you can easily stand up another Windows or even a *NIX server as a conventional secondary.
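For example, a conventional secondary on a *NIX box running BIND could pull the zone with a named.conf stanza along these lines (the zone name and master IP are placeholders, and you would also need to permit zone transfers to that box on the Windows DNS server’s Zone Transfers tab):

```
zone "corp.example.com" IN {
    type slave;                  // conventional secondary, fed by zone transfer
    masters { 192.168.10.5; };   // the AD-integrated DNS server (the DC)
    file "slaves/db.corp.example.com";
};
```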
I don’t know what you’re expecting when you say failover. Resiliency in DNS is based on simple redundancy. If you view it from the client perspective (and that’s where the rubber meets the road), the client will always query its primary server (the first one in its list), and if that fails it will query the next in its list, the list being either statically set or provided via DHCP.
There is no sense of failover or load balancing. It’s a very simple protocol dictated by RFCs that are decades old.
2 Spice ups
elijahkurk
(elijahkurk)
7
It still has to be a server to run DNS… Whether it’s right or not, there are plenty of single-machine Active Directory setups out there. If DNS doesn’t work properly for local resources when an internal and an external DNS server are used concurrently, why not have a failover instead? Ignore the secondary DNS unless the primary has been down for a long period, and that gets rid of the issues when a client decides to query the external address while the primary is still up. This is something that can be done via a self-healing script in an RMM, but why isn’t this an OS-level feature?
1 Spice up
mpk
(mpk)
8
It could be done that way. PowerShell scripting would be able to make that happen on Microsoft machines; shell scripts will do it on *nix.
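The hold-down failover behavior being asked for could be sketched roughly like this (a Python illustration, not how any stock resolver actually behaves; the class name, thresholds, and the query callback are all made up for the sketch):

```python
import time


class FailoverResolver:
    """Sketch of a resolver that only falls back to the secondary DNS
    server after the primary has been down longer than a hold-down
    period (e.g. longer than a reboot) -- the behavior the OP wants,
    which ordinary client resolvers do not implement."""

    def __init__(self, primary, secondary, hold_down_seconds=300,
                 query_fn=None, clock=time.monotonic):
        self.primary = primary
        self.secondary = secondary
        self.hold_down = hold_down_seconds
        self.query_fn = query_fn        # callable(server, name) -> ip or None
        self.clock = clock
        self.primary_down_since = None  # timestamp of first failed query

    def resolve(self, name):
        # Always try the primary first.
        answer = self.query_fn(self.primary, name)
        if answer is not None:
            self.primary_down_since = None  # primary is back; reset the timer
            return answer
        # Primary failed: start (or continue) the hold-down timer.
        now = self.clock()
        if self.primary_down_since is None:
            self.primary_down_since = now
        # Only use the secondary once the outage has outlasted the
        # hold-down period; a brief blip (reboot) never leaks queries
        # to the external secondary.
        if now - self.primary_down_since >= self.hold_down:
            return self.query_fn(self.secondary, name)
        return None
```

With a stub query function you can verify that a short primary outage returns no answer at all rather than leaking internal lookups to the secondary, while an outage longer than the hold-down falls through to it.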
However, the question is ‘why’?
If you build two DC’s, and set up the first DC as the primary DNS in your network, and the second DC as the secondary DNS in your network, the servers and clients will take care of everything correctly.
All clients know this behavior: phones, IoT, Microsoft, *nix, including Apple.
If DC1 goes down, DC2 will respond, since the client will try the secondary DNS.
I guess I don’t understand the need to re-invent the wheel.
viroid
(cthomas605)
9
Elijah,
Because that’s not how it works. Period. You can piss and moan about it all you want; it’s not going to change anything.
The ‘standard’ is two DCs per site. I get that some cheap bastards won’t shell out cash for another server, which is why virtualization is such a great tool. You ‘could’ run your secondary DC in a VM under ESX on some old POS workstation. Hell, you could run your data center on POS non-redundant workstations if you wanted. Sometimes you don’t have to solve the hardware redundancy issue, just build in redundancy at the service layer.
It sounds like you work for an MSP. The best advice I can give you, for your own sake, is to get out of the small-business mindset where nothing can be done right because the client isn’t willing to pay for it. You need a solid understanding of the technology so that you can design better infrastructure using best practices on a tight budget, not only for your client but for yourself. No one ‘wants’ to get up at O’Dark-Thirty to resolve problems that could wait until Monday morning if the service were redundant.
…ct
3 Spice ups
What I do for my small clients (including my own small network) is use an external DNS server as the secondary. This way, when the server goes down, they at least have internet access. Otherwise you will have to set up a second DNS server - but this is why you have virtualization: you can run two (or more) servers on a single physical host.
legoman
(LegoMan)
11
…I can’t believe you’re actually suggesting this crapshoot of an implementation!
This would mean roughly half of your DNS lookups for internal names are going out to the Internet where they are unable to be resolved, and fail.
3 Spice ups
As LegoMan has mentioned, I too would like to know the official MS line on this in relation to secondary DNS server entries.
2 Spice ups
gb77787
(gb5102)
13
If you can’t get a secondary DC, I would use the router as the DNS server on clients. On a ZyWALL (I’m sure other business-class routers are similar) you can set up a zone forwarder in the internal DNS so that any lookups for *.internal.mydomain.com are forwarded to the local DC, and any other zones are resolved externally. This way, if the DC is down, at least the client PCs still have internet access.
This setup also works great for small branch locations which connect to HQ via VPN and do not have their own local DC. If the VPN goes down or the DC is down, at least everyone can still use the internet.
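For what it’s worth, the same split-DNS idea can be expressed on anything running dnsmasq (the zone name and DC address below are placeholders):

```
# Forward lookups for the internal AD zone to the domain controller
server=/internal.mydomain.com/192.168.1.10
# Everything else resolves via the upstream resolver
server=8.8.8.8
```

If the DC is down, internal names fail but everything else still resolves, which matches the behavior described above.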
1 Spice up
If this is truly a “SMALL” network, I am not sure why you would need internal DNS without the DC working. In my small home network, I set my DHCP scope options to use my DC as primary DNS and Google as secondary. (That way, internet access still works while rebooting or otherwise working on the DC.)
If you truly need some internal DNS functionality on a small scale, you could edit the hosts file on each workstation to include the IP addresses of the internal resources you need to access. Definitely only viable for a small network with minimal changes.
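A minimal sketch of that hosts-file approach (hostnames and addresses are made up):

```
# C:\Windows\System32\drivers\etc\hosts on each workstation
192.168.1.10    fileserver.internal.mydomain.com    fileserver
192.168.1.11    printer.internal.mydomain.com       printer
```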
jcLAMBERT
(jcLAMBERT)
15
You mention small sites, so I presume a single DC.
Set DNS to the local DC, then use OpenDNS servers as the secondary DNS. At least they can surf the net while the DC is being repaired. Without another AD DC, they will not be using any file servers anyway.
Or just virtualize and always have more than one.
elijahkurk
(elijahkurk)
16
Please read the thread before posting - hell, read the OP. An external secondary DNS doesn’t work for internal DNS; it will cause issues, and it is not a solution. How did someone from Microsoft suggest doing this?
2 Spice ups
Incorrect. Clients will only use their secondary DNS should the primary fail to respond.
If you only have one server, then what’s wrong with using an external DNS just so you at least have internet connectivity for the clients? What internal DNS do they need if the only server they have and use has failed? This isn’t poor implementation on small networks.
Now, if you have more than a single server, then you need redundant DNS servers. But that’s not what we are discussing.
2 Spice ups
Sorry if I missed something from your OP, but from what I read I gathered you have a single-server environment. This is why, and only why, I suggested using an external server as secondary. If you do have multiple servers, then my question is: why do you only have a single DC? You really should have at least two DCs, both running DNS, which would give you failover. You can also have both run your DHCP role using DHCP failover in Windows Server 2012 R2.
1 Spice up
elijahkurk
(elijahkurk)
19
In theory, yes, it should only resolve via the secondary if the primary is unresponsive. In practice, sometimes the secondary DNS will be used even if the primary is perfectly fine. It works fine the vast majority of the time, but it isn’t perfect, and it will cause issues with slow logins and email if they’ve got local Exchange (trying to get most small clients onto 365).
2 Spice ups
legoman
(LegoMan)
20
No - just no - that is not true at all.
Windows clients especially will randomly switch around to use any DNS server, secondary, 3rd, 4th, more if specified - they don’t even always start with the primary.
Now, specifying multiple external DNS servers (like your ISP’s, Google DNS, or Norton DNS) in the Forwarders tab of your internal DNS server - that’s acceptable/good practice. But even there, if you specify too many, or mismatched, or slow DNS servers, you can make problems for yourself.
1 Spice up