What ever happened to Tandem and NonStop OS 2071
part of it was that the valley was the hotbed of all sorts of activities ... some topic drift:
and the other part was that the company was so large ... that over a period of years you were exposed to a large number of different things.
FS was divided into something like 13 sections ... my wife worked for the head of the section that had to do with advanced interconnect
she then did stints co-authoring AWP39 (peer-to-peer networking architecture ... in early SNA time-frame ... as an alternative to SNA's centralized terminal control oriented architecture) and the JES group ... involved in loosely-coupled (cluster by any other name) operation.
she then got con'ed into going to POK to be in charge of loosely-coupled architecture where she did peer-coupled shared data architecture
somewhat after the incident mentioned in the reference, she was co-author of and presented the corporate response to a fed. gov. request for a large, secure, campus-like distributed environment ... where she formulated much of the foundation of 3-tier architecture. we then extended that and were making customer executive presentations ... recent reference
which as mentioned ... didn't earn us any points with the SAA, SNA and-or T-R folks
Prior to the referenced incident ... I had been involved in doing a custom SMP kernel for HONE ... which provided time-sharing services to world-wide sales, marketing, and field support.
the US HONE datacenter had been consolidated in cal, (with numerous clones around the world) ... and eventually grew into possibly the largest single-system image operation in the world. Then because of (earthquake) disaster concerns ... there was a replica built first in Dallas and then a second replica in Boulder (with load balancing and fall-over across the three complexes).
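The load balancing and fall-over across the three complexes can be sketched in a few lines; this is a hypothetical illustration (the complex names and routing logic are mine, not the actual HONE mechanism), assuming sessions are simply routed to any currently healthy complex:

```python
# Hypothetical sketch: route sessions across three replicated complexes,
# with traffic falling over to the survivors when a site goes down.
# (Names are illustrative, not the real HONE configuration.)
import random

COMPLEXES = ["cal", "dallas", "boulder"]

def pick_complex(healthy):
    """Route a session to a random healthy complex; if a site is down
    (e.g. earthquake), its traffic falls over to the remaining sites."""
    candidates = [c for c in COMPLEXES if healthy.get(c, False)]
    if not candidates:
        raise RuntimeError("no complex available")
    return random.choice(candidates)

# normal operation: all three share the load
print(pick_complex({"cal": True, "dallas": True, "boulder": True}))
# cal down: sessions fall over to dallas/boulder
print(pick_complex({"cal": False, "dallas": True, "boulder": True}))
```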
With the proliferation of mid-range computers ... in the same time and market segment as vaxes ... in the early 80s, you started seeing the regions and then the larger branch offices installing their own 148s and 4341s. The US HONE datacenter (and replicas), with all of its world-wide clones interconnected over the internal network
and then the growth of the regional and branch machines ... again all on the internal network ... made for an amazing operation in the early 80s ... not to mention the internal network itself ... which was larger than the whole arpanet-internet from just about the beginning until sometime mid-85.
somewhat concurrent with trying to do a new, portable operating system, I had also started the high-speed data transport activity
What ever happened to Tandem and NonStop OS 2078
IBM and DEC equally funded MIT Project Athena for $25m each ... it had stuff like X and Kerberos (kerberos widely used authentication...
that included developing an operational high-speed backbone ... but we weren't allowed to bid on nsfnet rfp ... minor ref:
i was also doing some database stuff ... having done some of the stuff for system-r (original relational-sql)
but was also doing some support for the vlsi tools group out in los gatos. they were doing a different kind of relational database that had less structure than what has come to be RDBMS ... and looking at its use in chip design ... including being able to integrate logical and physical design. they were also working on the first 32bit 801 risc (there was already romp, which was 16bit 801 risc ... which was eventually used in the pc-rt) ... and there was lots of discussion about building a more sophisticated operating system than CPr
so part of the issue was reviving some of the stuff that we had been doing for acorn, porting unix, and-or trying to better leverage lots of the mainframe experience ... but apply it to a portable implementation ... this was the heyday of lots of portable unix workstation work going on in the valley ... as well as the stuff like gold and aspen going on at amdahl ... recent gold-aspen reference
the other issue was that i had recently done a new kernel debugging tool that was extensively deployed internally
... and I was starting to develop failure pattern stuff ... and classifying some general failure causes. For VM-370 implemented in 370 assembler ... there was a characteristic of 370 assembler that was a common cause of a large percentage of failures ... aka the programmer having to provide the management of hardware register values ... and unusual code paths not correctly establishing register values. I had written another program that analysed 370 assembler source and explicitly attempted to reconstruct all possible code paths and look for register-used-without-loaded cases (which still didn't catch the cases of a register not being reloaded with the expected values).
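The kind of analysis described above (enumerate the code paths, flag a register used before anything on that path loaded it) can be sketched with a toy control-flow graph; this is a minimal illustration in Python, not the actual 370 assembler tool, and the block names and register names are invented:

```python
# Hypothetical sketch of register use-without-load analysis: walk every
# acyclic path through a tiny control-flow graph of pseudo-assembler
# blocks and flag registers used before any block on the path loaded them.
# Each block records the registers it uses, the registers it defines
# (loads), and its successor blocks.

CFG = {
    "a": {"uses": [], "defs": ["R1"], "succ": ["b", "c"]},
    "b": {"uses": ["R2"], "defs": [], "succ": []},   # R2 never loaded on path a->b
    "c": {"uses": ["R1"], "defs": ["R2"], "succ": ["b"]},
}

def find_use_without_load(cfg, entry="a"):
    """Return sorted (block, register) pairs where some path reaches the
    block with the register never having been defined."""
    problems = set()

    def walk(block, defined, path):
        info = cfg[block]
        for r in info["uses"]:
            if r not in defined:
                problems.add((block, r))
        defined = defined | set(info["defs"])
        for s in info["succ"]:
            if s not in path:          # skip cycles in this toy version
                walk(s, defined, path | {s})

    walk(entry, set(), {entry})
    return sorted(problems)

print(find_use_without_load(CFG))   # -> [('b', 'R2')]
```

As the post notes, path enumeration like this still misses the subtler case where a register *is* loaded on every path, just not with the value the downstream code expects.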
What ever happened to Tandem and NonStop OS 2072
That was the f***ing problem!!!! Sorry. In addition, another problem was the failures caused by the decision of which grandfather sources one used. I am not kidding...
Doing a new operating system ... was an opportunity to address a whole plethora of issues (including portability and some number of common-frequent failure characteristics).
some of the distributed, interconnect as well as database experience came in handy when we were doing the ha-cmp product
minor related post
and for even more drift