Internet history (was: We need a better Internet)
Tom Buskey
tom at buskey.name
Thu Apr 8 11:17:18 EDT 2010
On Thu, Apr 8, 2010 at 10:42 AM, Kevin D. Clark
<kevin_d_clark at comcast.net> wrote:
>
> Benjamin Scott writes:
>
> > I remember a shared login script at UNH which defined various names,
> > so you could do things like:
> >
> > for H in $DWARVES ; do ... ; done
> >
> > Makes sense to put something like that in /etc/profile or whatever,
> if you're going to use the fancy name strategy. :)
>
So you basically need a lookup table of some sort.
I have an environment that encodes site, server/workstation, and base OS/VM
in the name, so simple greps pull groups straight out of a flat hosts file:

    cat hosts | egrep iw               # workstations; excludes the servers
    cat hosts | egrep 'is[0-9][0-9]a'  # gets just the base OS
I've worked at places that encoded the OS (SunOS, Solaris, Linux, OSF,
HP-UX, Windows, etc.), relative CPU power, role (server/workstation/compute
node), and location, all in 8 characters that could be scripted against
and/or explained to end users.
I've also used NIS netgroups in combination with the names, but they
weren't as fine-grained. That was useful for a 500-system network with 20+
sysadmins in 1995. Today you might use a more sophisticated database, or
just something like $DWARVES. With the naming scheme, you don't need a DB
at all if you understand the coding.
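As a rough sketch of what I mean (the field positions and the flat "hosts"
file layout here are made up for illustration, not the actual scheme), the
coded names let you build $DWARVES-style groups right in /etc/profile:

    # Minimal sketch: assumes a flat "hosts" file with one coded name per
    # line, and (hypothetically) 'w' = workstation, 's' = server in the
    # third position of the name.
    WORKSTATIONS=`egrep '^..w' hosts`
    SERVERS=`egrep '^..s' hosts`
    export WORKSTATIONS SERVERS

    # Then anyone can loop over a group, like the $DWARVES example above:
    for H in $SERVERS ; do ssh $H uptime ; done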
The 500-node site had some areas with <20 systems, and those used Looney
Tunes names. Big characters were servers. Foghorn is obviously bigger than
Bugs, right? Physically, or in stardom? Another area used Star Trek names.
I never could get the hang of those because I don't care about Star Trek
and don't care to learn the character relationships. Then they came up with
the coded naming scheme, which could be explained to everyone.
The coded names scale up pretty well.
> This is a good idea, and I ended up doing this too.
>
> The problem that I had was that I frequently had to deal with the
> situation of "this particular problem only really efficiently runs on
> 1, 4, or 16 nodes in the cluster" or "this problem only really
> efficiently runs on 1, 2, 4, 8, or 16 nodes in the cluster"... now,
> what nodes were these again, and how do I relate all of the logfiles
> that I obtained from the last program run?
>
>
You might have proven my point. Or you have a grid of Solaris SPARC,
Solaris x86, Linux (Debian, Red Hat), and HP-UX. Some code is
shell/perl/python and runs everywhere. Some is compiled only for HP-UX. Or
Red Hat has a tool you need but Debian doesn't.
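As for "what nodes were these again, and how do I relate the logfiles": a
rough sketch of one way to track it (the node-list file and "./solve" are
placeholders, not anything from this thread):

    # Minimal sketch: record which hosts a run used and stamp the logs
    # with a run ID so they can be matched up afterward.
    RUN=run-`date +%Y%m%d-%H%M%S`
    mkdir -p logs/$RUN
    cp nodes.16 logs/$RUN/nodes        # exactly which hosts ran this job
    for H in `cat nodes.16` ; do
        ssh $H ./solve > logs/$RUN/$H.log 2>&1 &
    done
    wait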
Luckily, few people have to deal with heterogeneous Unix networks with more
than one of several types nowadays.