centralizing network configuration
Paul Lussier
p.lussier at comcast.net
Mon May 28 11:56:09 EDT 2007
Bill McGonigle <bill at bfccomputing.com> writes:
> Request for Thoughts:
>
> I find myself frequently duplicating the same information when it
> comes to network devices. It goes into dhcpd.conf, one zone file for
> each BIND view, and others, I'm not thinking of at the moment. So,
> when changes/adds/deletes need to happen, all of those files need to
> be updated and one ought not screw up any of them.
>
> Some older attempts to make this better look to have been nis,
> netinfo, hesiod, and perhaps others.
It really doesn't get much simpler than either NIS or Hesiod.
However, neither is without deficiencies:
NIS is completely lacking in security (NIS+ was more or less DOA)
and requires at least one slave server per subnet, given that it
depends entirely upon UDP broadcasts.
Hesiod simply takes your /etc/* files and puts them into DNS zone
files, but it only does resolution; there is no authentication
involved with Hesiod, which means you'll need to implement something
like Kerberos alongside it.
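For what it's worth, a Hesiod entry is just a TXT record in the DNS;
a passwd lookup resolves to something roughly like this (the user and
domain are made up, and the exact labels depend on the lhs/rhs
settings in your hesiod.conf):

jdoe.passwd.ns.example.com.  IN  TXT  "jdoe:*:5001:5001:Jane Doe:/home/jdoe:/bin/bash"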
> The Big Hammer seems to be implementing an LDAP directory
Funny, isn't it, how terms like 'Big Hammer' and 'More trouble than
it's worth' often get applied to protocols which have 'Lightweight'
and 'Simple' in their names :)
> and running everything off of them in a multi-master configuration.
> It feels big and expensive, and not-a-text-file, but maybe I'm
> overstating the case.
LDAP isn't really big and expensive, but, IMO, it's neither simple nor
Light :) It *could* be a text file, if you wanted it to be. You'll
probably have to convert everything into an LDIF file at some point,
so you could consider keeping the master file in one or more LDIF
files, checked into revision control, and any time you update it,
re-load the server. Of course, this assumes that your LDAP directory
is read-only, which will prevent things like password changing by the
end user. Or, you could automate the revision control check-in/out
process nightly with a cron job that runs a script to first check out
the master file, then dump the LDAP database to LDIF, overwrite the
existing file, and finally check that back in.
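A rough sketch of that nightly job, assuming OpenLDAP's slapcat and a
Subversion working copy (the paths and commit message are
placeholders, not a tested setup):

#!/usr/bin/env python
# Nightly LDAP -> LDIF dump, checked back into revision control.
# The paths, the svn layout, and the use of slapcat are assumptions.
import subprocess

WORKDIR = "/srv/ldap-master"            # hypothetical svn working copy
LDIF    = WORKDIR + "/directory.ldif"

# 1. Start from the latest checked-in copy.
subprocess.check_call(["svn", "update", WORKDIR])

# 2. Dump the live directory; slapcat writes LDIF to stdout.
out = open(LDIF, "w")
subprocess.check_call(["slapcat"], stdout=out)
out.close()

# 3. Check the fresh dump back in.
subprocess.check_call(["svn", "commit", "-m", "nightly LDIF dump", WORKDIR])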
> Anyway, I'm tempted to come up with a text file that can describe all
> of these characteristics and generate the requisite config files from
> them. But I can't be the first person to want to do this, and I
> haven't yet found that system already out there. Maybe because it's
> a bad idea?
>
> Thoughts/Experiences?
Yeah, this is an area I've given a lot of thought to over the years.
And you're absolutely right, you're not the first person to have to
deal with this. Currently, I've got a certain amount of data
duplication, but we're in the (glacially slow) process of migrating
toward "Something That Works"(tm).
We use DHCP to statically assign IP addresses based on MAC addresses.
We configure the DHCP config files to have at most two pieces of
information unique to each host: the hostname and the MAC address. All
IP addresses are kept in DNS. Therefore, a DHCP config clause for any
given host looks like this:
host overpriced.foo.com {
        # eth0
        hardware ethernet 00:16:cb:97:67:55;
        fixed-address overpriced;
}
So, the DHCP files become our MAC<->Hostname mapping, while DNS is our
Hostname<->IP address mapping.
Currently, all the hostnames and MAC addresses are tracked in a
database and the DHCP config files are dynamically generated from a
script which deals with the revision control aspect of the config
files, and starts/stops the DHCP server.
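Stripped of the revision control and restart logic, the core of a
script like that doesn't amount to much; a minimal sketch (the
host/MAC data and the output file name here are made up):

#!/usr/bin/env python
# Emit dhcpd.conf host clauses from a hostname -> MAC mapping.
# In real life the mapping comes out of the database; this dict is a stand-in.
hosts = {
    "overpriced":  "00:16:cb:97:67:55",
    "underpriced": "00:16:cb:89:3d:51",
}

out = open("dhcpd.hosts.conf", "w")     # hypothetical include file
for name in sorted(hosts.keys()):
    out.write("host %s.foo.com {\n" % name)
    out.write("        hardware ethernet %s;\n" % hosts[name])
    out.write("        fixed-address %s;\n" % name)
    out.write("}\n\n")
out.close()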
The DNS zone files are still manually edited, but we're hoping to move
that to dynamic generation based on database records as well.
If your hosts appear in multiple DNS zones, and assuming the IP
addresses are constant across all those zones, you really only need
one set of zone files for all of them, provided you make use of the
zone file directives which enable this. For instance, $ORIGIN
defaults to the zone name being loaded from named.conf. Multiple
zones can load the exact same files provided '@' and $ORIGIN-relative
names are used properly. This may not be the case for you, but it's
worth mentioning.
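To make that concrete, a single file along these lines can be loaded
verbatim by any number of zone statements, because nothing in it is
absolute (every name and value below is illustrative):

$TTL 86400
; db.common -- shared by several zones; '@' and every name without a
; trailing dot expand relative to whichever zone loads the file.
@            IN  SOA  ns1 hostmaster ( 2007052801 3600 900 604800 86400 )
             IN  NS   ns1
ns1          IN  A    192.168.10.2
overpriced   IN  A    192.168.10.10
underpriced  IN  A    192.168.10.11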
If you want to keep things simple, and don't want to go the LDAP route
(and really, who can blame you ;), you might want to consider some
other alternatives:
- you could keep everything in master text-based /etc files which get
pushed out to every host upon a change. Keeping these files in a
central NFS-mounted directory and using a Makefile for this works
quite well (see the sketch after this list).
- there's always (My,Postgre)SQL and a bunch of p(erl,ython) or bash glue.
- As you mentioned, a single text file which describes everything for
each host, but one file seems a bit of a pain. If you craft the
file to be dealt with only via scripts, it becomes unreadable and
unmanageable by humans, but keeping it easily manageable by humans
makes it not easily parsable by scripts.
- You could do something like create a text-file database that's both
easily editable by humans and by scripts.
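On the first of those alternatives, the make-based push really can be
that small; something along these lines (the host names, file list,
and use of rsync are all assumptions):

# Sketch of pushing master /etc files from the central directory.
# Recipe lines must start with a tab.
HOSTS = overpriced underpriced
FILES = hosts passwd group aliases

push: $(HOSTS)

$(HOSTS):
	rsync -a $(FILES) $@:/etc/

.PHONY: push $(HOSTS)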
For that last idea, I'm envisioning essentially the normal /etc
text-based files, with one twist: each row gets a unique identifier
that remains the same between files which should be related. For
example, you'd have the following files:
- hostnames
- ipAddresses
- macAddresses
Each one is a 2-column table, where the left column is a unique
identifier and the right-hand column is the pertinent data:
hostnames          ipAddresses            macAddresses
1 overpriced       1 192.168.10.10        1 00:16:cb:97:67:55
2 underpriced      2 192.168.10.11        2 00:16:cb:89:3d:51
This makes it really easy to manipulate the data using the standard
unix command line tools like grep/sed/awk/sort, etc., as well as to
edit it by hand. Scripting the manipulation is simple as well, since
you can, for example, in perl, 'tie' each file together using arrays
and/or hashes, then change the data live (since the array/hash would
be 'tie'd to the actual file on disk). Or, you can easily make these
files BerkeleyDB tables. Obviously what you're doing here is mimicking
the tables of a relational database, but without all the overhead (and
none of the relational data integrity) that comes with using a
database engine.
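A Python equivalent of that 'tie' trick is small too; for instance,
joining the three files on the shared identifier (file names and
layout as above, error handling omitted):

#!/usr/bin/env python
# Join hostnames, ipAddresses and macAddresses on their shared id.
def load(filename):
    """Return a dict mapping id -> value for a two-column file."""
    table = {}
    for line in open(filename):
        fields = line.split(None, 1)
        if len(fields) == 2:
            table[fields[0]] = fields[1].strip()
    return table

names = load("hostnames")
addrs = load("ipAddresses")
macs  = load("macAddresses")

# One joined row per id, e.g. "overpriced 192.168.10.10 00:16:cb:97:67:55"
for key in sorted(names, key=int):
    print("%s %s %s" % (names[key], addrs.get(key, "-"), macs.get(key, "-")))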
I don't know what your needs are, how big your environment is, or how
complex you want to get. There are millions of solutions to this
problem. Most of them have been tried and used or discarded for
various reasons. I think an important factor is that a lot of times
this data is very site-dependent, and the management of it becomes
tied to something local as well.
At our site we currently make heavy use of a central NFS mounted
directory containing all our config files and an extremely complex
Makefile. Unfortunately we've essentially re-implemented cfengine,
but to migrate this mess to something like cfengine now would take too
long for not a lot of gain.
I'd be more than happy to discuss this topic in more depth, either on
or off-list. And I'd certainly be very interested in learning more
about the environment which you're trying to manage, since that might
spur more ideas.
--
Seeya,
Paul