
Re: [Linux-aus] DNS inside firewall.



On Sat, 2003-02-22 at 23:16, Tom Swann wrote: 
> We use an ADSL connection for our internet service, and make use of the ISP's 
> DNS server. At present we don't have a DNS server of our own.
> 
> For staff to access the intranet web they need to use the IP address of the 
> server. This causes confusion for some people.
> 
> I've never set up a DNS before and I think that perhaps I need one to resolve a 
> name to an IP address for the internal server and refer all other addresses to 
> the ISP's DNS.

As Les and Kevin have pointed out, a caching / forwarding nameserver on
your local network will do the trick.

Two things to mention:

1) Naming conventions
   ------------------

First, you'll need a namespace for the servers on your local network.
Since [I presume] you're on a private (ie 192.168.1.x) network, the
answers your nameserver hands out for internal hosts need to point at
addresses in that internal range. That may seem like a no-brainer, and
it is, but what often trips people up is when they have servers locally
that are ALSO accessed from the outside internet.

For example, say I have web server www.junk.com, with outside address
63.100.50.21. On the inside network, the machine is actually
192.168.1.21, and that's how my developers need to connect to it. If
they try to connect to www.junk.com they're going to go all the way out
onto the internet and have to turn around and come back again - or
worse, if the firewall isn't up to it, the packets will simply get lost
and you won't be NAT'd back in at all.

This means you need a different name for it. You could use
web1.junk.com for the inside address, but that would mean having
inside and outside addresses in the same DNS zone, like this:

	63.100.50.21 www.junk.com
	192.168.1.21 web1.junk.com

Not a show stopper (bad form, perhaps), but it is a tiny bit of a
security bad practice (you don't want to give away details of your
internal network). And what happens if you need to connect to that
server directly from the outside (as opposed to www, which might be a
VIP or load balancer of some kind)? Either way, it's not scalable.

A better solution would be to use a different domain, or a subdomain.
Here are examples of both:

	63.100.50.20 www.junk.com
	63.100.50.21 web1.junk.com
	63.100.50.22 web2.junk.com
	192.168.1.21 web1.internal.junk.com
	192.168.1.22 web2.internal.junk.com

and so on. Of course, people inside still want to connect to the
"website" and so you'd probably need 

	192.168.1.20 www.internal.junk.com

This example is all germane to an environment where you have internal
servers and internal users on the same private LAN; the layout above is
really more suited to a [remote] datacenter, but it does illustrate the
point.

A simpler way to do it (and more relevant to your situation) is to use
something completely different:

	192.168.1.21 www.localnet

You can do this by simply creating a top level domain in your DNS
configuration called localnet, and creating names for your various
internal servers underneath it.
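
For concreteness, here's a minimal sketch of what that might look like,
assuming you're running BIND and keeping zone files in /var/named (the
file names and addresses below are just placeholders). In named.conf
you'd declare the zone:

	zone "localnet" {
		type master;
		file "localnet.zone";
	};

and /var/named/localnet.zone would contain something along these lines:

	$TTL 86400
	@	IN	SOA	ns.localnet. hostmaster.localnet. (
				2003022401	; serial
				3600		; refresh
				900		; retry
				604800		; expire
				86400 )		; minimum
		IN	NS	ns.localnet.
	ns	IN	A	192.168.1.2
	www	IN	A	192.168.1.21

Add an A record for each internal server, bump the serial, reload, and
your staff can type http://www.localnet/ instead of an IP address.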

2) Reliability
   -----------

Most people tend to miss the fact that "browsing the web" is as much
getting name queries answered promptly as it is actually having a
responsive upstream connection. Especially in residential situations,
ISPs tend to have grossly overloaded nameservers - and so all their
clients think "the internet is slow" when in fact the only real problem
is that their upstream DNS server is simply struggling under the load of
too many requests.
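
A quick way to see whether that's what's happening to you is to time a
lookup against the ISP's server with dig (substitute your ISP's
nameserver address for the placeholder here); the interesting bit is
the "Query time" line near the bottom of the output:

	$ dig @203.0.113.53 www.example.com
	...
	;; Query time: 740 msec

(That 740 is made up, but anything consistently in the hundreds of
milliseconds means the ISP's nameserver is a big part of your "slow
internet" problem.)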

You can, therefore, gain a lot by having your own nameserver locally. If
you go all the way, and have a server that goes out to the root
nameservers and works recursively down from there to answer your query,
then you're not dependent at all on the ISP's nameserver. On the other
hand, nameservers do tend to take up a fair bit of memory, and while the
initial cache is populating things will be very slow because there are a
lot of queries to be made. Once it's been running for a few days, it's
great. 
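
A recursive setup like that is mostly a matter of leaving things out:
no forwarders, just a directory, some access control, and the root
hints. A minimal sketch, assuming BIND with zone files in /var/named
(the file names are just the conventional ones, adjust to taste):

	options {
		directory "/var/named";
		// only answer queries from our own LAN
		allow-query { 127.0.0.1; 192.168.1.0/24; };
	};

	// root hints, so the server can work down from "." by itself
	zone "." {
		type hint;
		file "named.root";
	};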

The other variant is to cache locally, but forward to the upstream
nameserver for queries that aren't already cached. This isn't bad, and
while it exposes you more to the overload problem upstream, you do
benefit from the fact that most of the stuff you'd want has already
been queried by someone else in your "neighborhood" and so getting the
answer from the ISP is faster than getting it yourself from the global
DNS network.
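
The caching + forwarding variant is the same named.conf with a
forwarders clause added; the addresses below are placeholders for your
ISP's two nameservers (again, only a sketch):

	options {
		directory "/var/named";
		allow-query { 127.0.0.1; 192.168.1.0/24; };
		// send anything we haven't cached to the ISP's servers
		forwarders { 203.0.113.53; 203.0.113.54; };
		// never try to resolve from the roots ourselves
		forward only;
	};

Drop the "forward only" line (or use "forward first") if you'd rather
fall back to resolving from the roots when the ISP's servers don't
answer.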

The thing I really wanted to bring out with respect to reliability is
this: the DNS server HAS to be working ALL the time. You're running a
nameserver locally (be it a complete one or a caching+forwarding one) in
order to be able to have your little private lookups work (www.localnet
or whatever). But now EVERYONE in your office is depending on that
nameserver working. Believe me, if it ever goes down, even for a moment,
you'll hear about it.

Little things like "add a host to the domain, restart nameserver, oops
I made a typo, fix fix, restart" result in a few minutes of downtime
during which people will be unable to surf the web. You might wonder
why they're spending so much time surfing the web instead of working :)
but it always seems that the moment your DNS server goes down is the
one moment that day when EVERYONE wants to be using the internet. :)
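
If you're running BIND 9 you can take most of the risk out of that
edit-and-restart cycle by checking your work before you reload (the
file names here are just examples):

	named-checkconf /etc/named.conf
	named-checkzone localnet /var/named/localnet.zone
	rndc reload

If either check complains, you fix the typo before the running server
ever sees it, and "rndc reload" picks up the change without a full
restart.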

[YES, you can list the upstream server as a secondary, but failing over
to secondary DNS takes time, and that timeout has to expire for every
lookup. For a page with lots of advertisements, total time can quickly
build into the minute+ range, which brings us back to that unhappy
co-worker thing]
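
On the client side, that fallback order lives in /etc/resolv.conf; you
can soften the pain a little with the resolver's timeout options, but
it only shortens the wait rather than removing it (the addresses and
values here are illustrative):

	# the local nameserver, tried first
	nameserver 192.168.1.2
	# the ISP's server as a fallback (placeholder address)
	nameserver 203.0.113.53
	options timeout:2 attempts:2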

So make sure the machine you set DNS up on is reliable, not overtasked
with other things, and has got a UPS. ;) It doesn't have to be a large
machine, but you'll save yourself a lot of grief if it's dedicated and
left alone.

Hope this helps,

	Andrew

-- 
Andrew Frederick Cowie
Operational Dynamics Consulting Pty Ltd

Australia +61 2 9977 6866  North America +1 646 270 5376

andrew@operationaldynamics.com.au