Google Architecture 3744
Google Architecture 3748
Anne & Lynn Wheeler a little drift back to ibm: from above: Safeway and its technology partner IBM were involved in the first ‘Chip and Pin’ trials held in the...
and the difference between that and loosely-coupled or parallel sysplex?
long ago and far away, my wife was con'ed into going to POK to be in charge of loosely-coupled architecture ... she was in the same organization as the guy in charge of tightly-coupled architecture. while she had come up with peer-coupled shared data architecture, it was tough slogging because all the attention was focused on tightly-coupled architecture at the time. she also had battles with the sna forces ... who wanted control of all communication that left the processor complex (i.e. everything outside of direct disk i/o, etc).
part of the problem was that in the early days of SNA ... she had co-authored a "peer-to-peer" network architecture with Bert Moldow ... AWP39 (somewhat viewed as in competition with sna). while SNA was tailored for centralized control of a large number of dumb terminals ... it was decidedly lacking in doing peer-to-peer operations with large numbers of intelligent peers.
a trivial example: sjr had done a cluster 4341 implementation using highly optimized peer-to-peer protocols running over a slightly modified trotter-3088 (i.e. what eventually came out as the conventional ctca ... but with interconnect for eight processors/channels). peer-to-peer, asynchronous operation could achieve cluster synchronization in under a second elapsed time (for eight processors). doing the same thing with SNA increased the elapsed time to approx. a minute. the group was forced to release only the SNA-based implementation to customers ... which obviously had severe scaling properties as the numbers in a cluster increased.
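the scaling difference is roughly what you'd expect from an asynchronous, all-at-once exchange versus serialized, centrally-mediated polling. a minimal back-of-envelope sketch (the per-exchange latency and overhead factor here are invented illustration values, not measurements from the sjr/3088 work):

```python
# toy latency model for cluster synchronization, illustrating why
# asynchronous peer-to-peer beats serialized, host-mediated protocols
# as the cluster grows. EXCHANGE_LATENCY and protocol_overhead are
# assumed illustration values, not figures from the original work.

EXCHANGE_LATENCY = 0.05  # seconds per message round-trip (assumed)

def peer_to_peer_sync(n_nodes):
    # every node broadcasts its state to all peers concurrently;
    # elapsed time is ~one round-trip regardless of cluster size
    return EXCHANGE_LATENCY

def serialized_sync(n_nodes, protocol_overhead=10):
    # a central point works through each node in turn, with per-node
    # protocol overhead (session setup, pacing, acks, ...), so elapsed
    # time grows linearly with cluster size
    return n_nodes * protocol_overhead * EXCHANGE_LATENCY

for n in (2, 8, 32):
    print(n, peer_to_peer_sync(n), serialized_sync(n))
```

the point of the sketch is only the shape of the curves: the asynchronous case is flat in the number of nodes, while the serialized case grows linearly (and worse, once per-node overhead itself grows).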
the communication division did help with significant uptake of PCs in the commercial environment. a customer could replace a dumb 327x with a PC for approx. the same price, get datacenter terminal emulation connectivity and in the same desktop footprint also have some local computing capability. as a result, you also found the communication group with a large install base of products in the terminal emulation market segment (with tens of millions of emulated dumb terminals)
in the late 80s, we had come up with 3-tier architecture (as an extension to 2-tier, client-server) and were out pitching it to customer executives. however, the communication group had come up with SAA, which was oriented toward trying to stem the tide moving to peer-to-peer networking and client-server, and away from dumb terminals. as a result, we tended to take a lot of heat from the SAA forces.
in the same time frame, a senior engineer from the disk group in san jose managed to sneak a talk into the internal, annual world-wide communication conference. he began his talk with the statement that the communication group was going to be responsible for the dissolution of the disk division. basically the disk division had been coming up with all sorts of high-thruput, peer-to-peer network capability for PCs and workstations to access the datacenter mainframe disk farms. the communication group was constantly opposing these efforts, protecting its installed base of terminal emulation products. recent reference to that talk:
The Power of the NORC
One of the web pages about the Naval Ordnance Research Computer constructed by IBM for use at the Naval Weapons Proving Ground in Dahlgren, Virginia claims that its performance was unsurpassed until the Control...
i had started the high-speed data transport project in the early 80s ... hsdt
Google Architecture 3746
re: we took some amount of heat in the 80s from the communication group working on high-speed data...
and had a number of T1 (1.5mbit) and higher speed links for various high-speed backbone applications. one friday, somebody in the communication group started an internal discussion on high-speed communication with some definitions ... recent posting referencing this
medium-speed: 19.2kbits
high-speed: 56kbits
very high-speed: 1.5mbits
the following monday, i was in the far-east talking about purchasing some hardware and they had the following definitions on their conference room wall
medium-speed: 100mbits
high-speed: 200-300mbits
part of this was that the communication division's 37xx product line only supported links up to 56kbits. They had recently done a study to determine if T1 support was required ... which concluded that in 8-10 years there would only be 200 mainframe customers requiring T1 communication support. The issue could have been that the people doing the study were supposed to come up with results supporting the current product line ... or maybe they didn't understand the evolving communication market segment, or possibly both.
their methodology was to look at customers using 37xx "fat pipes" ... basically being able to operate multiple parallel 56kbit links as a simulated single connection. They found several customers with two parallel links, some with three parallel links, a few with four parallel links, and none with a higher number. Based on that, they projected that it would take nearly a decade before there were any number of customers with parallel links approaching T1 (1.5mbits) capacity.
the problem with the analysis at the time was that the telcos were tariffing a full T1 at approx. the same price as five 56kbit links. customers needing more than four 56kbit links were buying a full T1 and operating it with hardware from other vendors. A trivial two-week survey turned up 200 mainframe customers with full T1 operations ... something that the communication group was projecting wouldn't occur for nearly another decade.
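the tariff arithmetic explains exactly why the fat-pipe survey saw nobody above four parallel links. a quick sketch (the five-to-one price ratio is from the post; the unit cost of 1.0 per 56kbit link is just a normalization):

```python
# sketch of the fat-pipe tariff arithmetic described above.
# assumption from the post: a full T1 was tariffed at roughly
# the price of five 56kbit links. the unit cost of 1.0 per
# 56kbit link is an arbitrary normalization.

LINK_56K_COST = 1.0          # normalized monthly cost of one 56kbit link
T1_COST = 5 * LINK_56K_COST  # T1 tariffed at ~five 56kbit links
LINK_56K_BITS = 56_000
T1_BITS = 1_544_000          # T1 line rate

def cheapest(n_links):
    """cheapest way to buy n_links worth of 56kbit capacity."""
    fat_pipe_cost = n_links * LINK_56K_COST
    return "fat pipe" if fat_pipe_cost < T1_COST else "full T1"

for n in range(2, 8):
    print(n, cheapest(n))
```

so up through four parallel 56kbit links the fat pipe is cheaper, and at five the full T1 costs the same while carrying ~27x the bandwidth of a single 56kbit link ... which is why customers past four links simply disappeared from the 37xx numbers and bought T1 gear from other vendors.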
Google Architecture 3745
so the issue is effectively how fast fault isolation/recovery/tolerance technology becomes commoditized. this is...
so last fall, i was at a conference and there was a talk about "google as a supercomputer". the basic thesis was that google was managing to aggregate large collections of processing power and data storage well into the supercomputer range ... and doing it for 1/3rd the cost of the next closest implementation.
slightly related old post
from when we were working on scaling for our ha-cmp product