So to check this out we did a few packet traces while executing DNS queries (using nslookup), some of which worked and some of which did not. It quickly became apparent that it was replies of over 512 bytes that were having problems... but why?!
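For reference the capture itself was nothing fancy; something like this on a box on the inside of the firewall is enough to see the reply sizes, since tcpdump prints the DNS message length in parentheses at the end of each line (the interface name here is just an example):
~> tcpdump -n -s 0 -i eth0 udp port 53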
So I reviewed my firewall configs, and for the ruleset in question there was nothing in my access-lists denying this traffic; for this particular connection we were allowing ip any any to traverse the firewall. So then I looked at my service-policy and compared it to the Cisco best practice doc here:
http://www.cisco.com/web/about/security/intelligence/dns-bcp.html
Everything checked out; this was the config:
policy-map type inspect dns preset_dns_map
 parameters
  message-length maximum 512
Given the packet sizes we had already seen, I was fairly certain my problem was directly related to the message-length 512 parameter. I just didn't want to go and change it without understanding it, so I read (scanned through) these RFCs:
- http://www.ietf.org/rfc/rfc1035.txt (the original DNS spec; it does confirm the 512-byte UDP message limit, suggesting the Cisco best practice is correct as well)
- http://www.ietf.org/rfc/rfc2671.txt (EDNS0; ahh, so the message length between EDNS-compatible clients and servers can be longer... MUCH longer, as the quick dig example below shows)
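If you want to see the difference EDNS makes from the client side, dig lets you control the advertised UDP buffer size (the name queried here is just a placeholder). The first query is plain DNS, so the server has to fit the reply in 512 bytes or truncate; the second adds an EDNS OPT record advertising a 4096-byte UDP buffer:
~> dig +noedns example.com txt
~> dig +bufsize=4096 example.com txt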
To see where things stood, I ran the DNS-OARC reply-size test, which at this point just timed out:
~> dig +short rs.dns-oarc.net txt
;; connection timed out; no servers could be reached
A bit more research suggested that my BIND install by default advertises an EDNS UDP buffer size (edns-udp-size) of 4096 bytes ... http://www.zytrax.com/books/dns/ch7/hkpng.html ... I didn't know that! But at least now it's all falling into place.
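An easy way to confirm what your own server is advertising, assuming you can query it directly (loopback address and query name here are just examples), is to look at the EDNS line dig prints alongside the response; with the BIND default you should see something like this (output trimmed, exact wording varies between dig versions):
~> dig @127.0.0.1 example.com soa
; EDNS: version: 0, flags:; udp: 4096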
I updated my FW policy to this, knowing I needed a 4K message-length:
policy-map type inspect dns preset_dns_map
 parameters
  message-length maximum 4096
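For completeness, preset_dns_map only takes effect because it is referenced from the inspection policy; on a box using the ASA defaults that reference looks something like the below (names here assume the default global policy, adjust if yours differ), and show service-policy inspect dns will confirm the new limit is in play:
policy-map global_policy
 class inspection_default
  inspect dns preset_dns_map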
And now it all works:
~> dig +short rs.dns-oarc.net txt
rst.x3827.rs.dns-oarc.net.
rst.x3837.x3827.rs.dns-oarc.net.
rst.x3843.x3837.x3827.rs.dns-oarc.net.
"10.0.0.1 DNS reply size limit is at least 3843"
"Tested at 2010-04-21 11:57:32 UTC"
"10.0.0.1 sent EDNS buffer size 4096"
I guess the alternative would have been to set a more restrictive edns-udp-size as an option in my BIND config, e.g.:
options {
    edns-udp-size 512;
};
But given that the whole point of EDNS is to allow larger UDP responses, forcing everything back down to 512 bytes just seems inefficient.
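Any answer that doesn't fit in 512 bytes would then come back with the truncated (TC) bit set and the client has to retry the whole query over TCP. You can see that happen with something like the following, against any name whose answer is larger than 512 bytes (the name here is just a placeholder; +ignore stops dig retrying over TCP so the tc flag is visible in the reply header):
~> dig +bufsize=512 +ignore example.com txt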
I found there was a surprising lack of clear and concise information on this problem online, and given that the default message-length when inspecting DNS on a Cisco ASA is 512 bytes, I'd be interested to hear if anybody else has run into these problems.