Dev Ops - architecture (local not cloud)
Bruce Dawson
jbd at codemeta.com
Fri Dec 6 17:36:06 EST 2013
A long time ago in a distant quantum universe, there was at least one
company that did it like this:
* Everyone had a "home directory" on the share (at the time, NFS V3 -
beta!)
* Everyone had a local disk on their workstation.
* All utilities (compilers, OS utilities, ...) were on a "ghost" drive
that was effectively read-only to most developers and was
(sometimes) updated nightly with the latest patches from the OS vendors.
* The source control system (at the time, RCS!) was on a separate
system, and the ci/co commands transferred files to that system.
* If people wanted stuff backed up, they put it on their home
directory on the share.
* If people wanted speed, they used their local disk. This was usually
done for "unit builds" and unit testing.
* "Assembly builds" and all other builds were done by release
engineering, and they pulled from the (at the time) RCS repository.
If your stuff didn't get into the repository, it didn't get built.
(And you didn't get the automatic bug report mailings).
--Bruce
On 12/06/2013 08:35 AM, Greg Rundlett (freephile) wrote:
...
What does your (software development) IT infrastructure look like?
...