Replacing NIS [was:NIS - "Could not read ypservers map" during "make"]

Mark Komarinski mkomarinski at wayga.org
Wed Jun 18 13:47:38 EDT 2003


On Wed, Jun 18, 2003 at 01:22:06PM -0400, pll at lanminds.com wrote:
> 
> >>>>> On Wed, 18 Jun 2003, "Derek" == Derek Martin wrote:
> 
>   Derek> Another way is to set up an NIS-like master-slave
>   Derek> relationship with the master and one host on each subnet
>   Derek> where systems which need the files live.  The "master" pushes
>   Derek> the files out to the "slaves" which in turn push the files
>   Derek> out to each of the systems on their subnet.  This keeps the
>   Derek> vast majority of traffic on local subnets and obviously
>   Derek> serializes much of the transfers.
> [...snip...]
>   Derek> Also, since all of this will be scripted, it's easy to
>   Derek> determine which hosts did not receive updates, and notify the
>   Derek> sysadmin team of problems so they can (hopefully) react and
>   Derek> fix it before anyone notices.
> 
> Also, this could be configured to be a 'pull' system where, when the 
> clients boot, they contact the "ypserver" and pull the files over.
> I'm thinking of a 2-way file transmission scenario here where the 
> servers provide both push and pull capability using both rdist and 
> rsync.
> 
> 2 types of servers:  Master - the primary system where all changes
>                               are centralized and pushed out from
> 
>                      Slaves - one or more per subnet which get changes
>                               propagated to them by the master and push
>                               the new files out to the clients
> 
> In addition, both masters and slaves would run an rsync:: server.
> At boot time, the slaves would attempt to contact the server to pull 
> the latest files over, and only the deltas at that, not the entirety 
> of every file.
> 
> A normal client works exactly the same as a slave server, with the 
> caveat that if the client fails to contact a local slave, you could 
> opt for it to attempt to contact the master as well.
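
For what it's worth, the boot-time pull you describe boils down to
something like this (the host names, the rsync module name, and the
paths here are all invented):

    #!/bin/sh
    # Pull the shared files from the local slave, falling back to the
    # master.  rsync only transfers the deltas of each file.
    SLAVE=slave1.example.com        # slave on this subnet
    MASTER=master.example.com

    for host in $SLAVE $MASTER; do
        rsync -az "rsync://$host/sitefiles/" /var/sitefiles/ && exit 0
    done

    # neither server was reachable; tell the sysadmin team
    echo "file pull failed on `hostname`" | mail -s "sync failure" root
    exit 1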
 
All you've done is reimplement NIS - poorly.  It already does everything
you describe, and pretty well too.  I can add new automounter maps, create
accounts, and they get pushed out for me.
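
On the master that amounts to the usual (paths vary a little by OS, but
/var/yp is the common location):

    # after editing a map source or adding the account on the master
    cd /var/yp
    make    # rebuilds the changed maps and yppush'es them to the slaves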

When I first started working with NIS, we had a Sparc IPX
running on a 10-Base2 network (aka coax).  That thing would fall over
and kill the network if you looked at it.

Fast forward to today.  I've got an SGI box on a switched 100-BaseT network
serving three times as many machines without a burp.  The only reasons
I'm replacing it with a Linux server are that (a) it's IRIX and (b) it's
not rackable - the box is on the floor.

The only problem I run into is a ~5 second delay while ypbind figures
out what's going on.  But that's probably about as long as rsync would
take to figure out which files need to be pushed.

Nor would your solution solve the most serious issue with NIS: the
"if you have root on one box, you can take over anyone's account"
vulnerability.

> >>>>> On Wed, 18 Jun 2003, "mike" == mike ledoux wrote:
> 
>   mike> how do you handle password changes?  I can't think of any
>   mike> reasonable way to deal with that short of requiring people to
>   mike> log in to the 'master' server to make changes, and that seems
>   mike> like a very fragile solution to me (I can easily imagine my
>   mike> users 'forgetting' and changing their password on the local
>   mike> copy of the file, and not being able to log in tomorrow).
>   mike> Maybe I'm just missing something obvious.
> 
> Well, I think this could be handled pretty easily in a couple of 
> different ways.  The first, and rather simplistic method is via a Web 
> interface where the user changes their password, shell, whatever.  
> Upon submission and validation of the form, an rdist is kicked off to 
> the slaves and clients.
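
The script behind that form would not be much more than this (the file
list, slave names, and rsync module below are invented):

    #!/bin/sh
    # kicked off by the web interface after the change has been validated
    FILES="/etc/sitefiles/passwd /etc/sitefiles/shadow"
    SLAVES="slave1.example.com slave2.example.com"

    for host in $SLAVES; do
        rsync -az $FILES "rsync://$host/sitefiles/" ||
            echo "push to $host failed" | mail -s "push failure" root
    done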
 
Already implemented in yppasswd.
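The user just runs

    $ yppasswd    # talks to rpc.yppasswdd on the master, which updates
                  # the source file, rebuilds the map, and pushes it out

and never needs to know or care which machine holds the master copy.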

>  The interesting thing about this idea is that with a combination of
> good planning, a good client replication/auto-build system such as
> FAI, System Imager, or maybe even KickStart, you could actually have a
> web server running on each client system.  This would first make
> changes to the local files, then rsync the changes back up to the
> local subnet slave.  The slave could then both immediately update its
> subnet and contact the master, which would kick off an rdist/rsync to
> all slaves except the one from which it received the latest updates.
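
Roughly, each slave would then need something along these lines (host
names, paths, and the rsync module are again invented):

    #!/bin/sh
    # run on a slave after it receives an update from one of its clients:
    # redistribute on the local subnet, then hand the change up to the
    # master, which redistributes it to every other slave
    MASTER=master.example.com
    CLIENTS="client1 client2 client3"

    for c in $CLIENTS; do
        rsync -az /var/sitefiles/ "rsync://$c/sitefiles/"
    done

    rsync -az /var/sitefiles/ "rsync://$MASTER/sitefiles/"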
 
Sounds like a lot of logic behind it.

> There are some obvious concerns with this method, namely, do you want 
> to trust propagating changes to an entire network which were received 
> from a possibly compromised host.  However, in certain environments, 
> I can see where this risk is really no worse than running NIS in the 
> first place, since an NIS server is rather easy to compromise even 
> without physical access to it.
 
Err?

> Another more interesting idea would be a network based revision
> control system with post-commit hooks to distribute the changes.  For
> instance, assume you have a master server which keeps all these files
> under revision control.  The clients at boot time could simply attempt
> to update their working copy of these files, and failing that, check
> out the repository.
> 
> When any of the files is changed, the client would first check in the 
> working copy, which on the server would trigger a post-commit hook 
> which would turn around and kick off some command issued to all 
> clients to update their working copies.
> 
> I don't have this idea completely fleshed out yet, and there are some
> obvious flaws and/or requirements for making this all transparent to
> the end user to which I have not given a lot of thought yet.
> However, using Subversion as a revision control system makes this
> scenario incredibly plausible IMO.
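
The post-commit half of that is just a short hook script.  A minimal
sketch, assuming a stock Subversion repository and invented host names:

    #!/bin/sh
    # hooks/post-commit: Subversion runs this with the repository path
    # and the new revision number after every successful commit
    REPOS="$1"
    REV="$2"
    CLIENTS="client1 client2 client3"

    for host in $CLIENTS; do
        ssh "$host" "svn update -q /var/sitefiles" ||
            echo "$host missed rev $REV" | mail -s "svn update failure" root
    done

On the client side, boot-up is then just "svn update /var/sitefiles ||
svn checkout svn://master.example.com/sitefiles /var/sitefiles" (that
repository URL is made up as well).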
 
In a previous life, we kept all NIS files in CVS.  Made cleaning up
messes a lot easier, especially when we had three admins working on
it at once.

-Mark