Exceptions at basic block boundaries 544
360-67 and 370 had hierarchical tables ... slightly different ... enuf to make virtual machine implementations of the different tables something of a pain.
370 architecture had a segment table with segment table entries that pointed to page tables. the architecture provided optional support for selection of either 2k or 4k pages and either 64k segments (16 pages) or 1mbyte segments (256 pages). base 370 architecture allowed for implementation of either "STO" associative TLB (i.e. the physical address of the segment table origin was used to tag entries in the TLB, effectively address space associative) or "PTO" associative TLB (i.e. the physical address of the page table origin was used to tag the entries in the TLB). In an STO-associative implementation, shared pages belonging to the same shared segment (in different address spaces) would have different entries in the TLB. In a PTO-associative implementation, there would be a single TLB entry for shared segments.
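The STO vs PTO distinction above comes down to what value tags a TLB entry. A minimal sketch (addresses and tuple layout here are illustrative, not actual 370 machine formats):

```python
# Contrast STO- vs PTO-associative TLB tagging. In STO mode the tag
# carries the segment-table origin (address-space identity); in PTO
# mode it carries the page-table origin (shared-segment identity).

def tlb_tag(sto, pto, vpage, mode):
    """Return the lookup tag for a TLB entry.

    sto   -- physical address of the segment table origin (address space)
    pto   -- physical address of the page table origin (segment)
    vpage -- virtual page index within the segment
    """
    if mode == "STO":
        return (sto, vpage)   # same shared page, different address space
                              # -> different tag -> duplicate TLB entries
    else:  # "PTO"
        return (pto, vpage)   # all sharers of a segment map to one tag

# two address spaces (different STOs) sharing one segment (same PTO):
a = tlb_tag(sto=0x1000, pto=0x8000, vpage=5, mode="STO")
b = tlb_tag(sto=0x2000, pto=0x8000, vpage=5, mode="STO")
assert a != b   # STO-associative: duplicate entries for a shared page

a = tlb_tag(sto=0x1000, pto=0x8000, vpage=5, mode="PTO")
b = tlb_tag(sto=0x2000, pto=0x8000, vpage=5, mode="PTO")
assert a == b   # PTO-associative: single entry for the shared segment
```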
I believe all 370s implemented STO-associative ... even tho the architecture allowed for PTO-associative.
low-end & mid-range 370s tended to have small TLBs that had all entries flushed every time the address space (segment table pointer) was changed ... so there was no concept of having concurrent entries for mappings from different page tables.
370 165-168 had a 128-entry TLB and a 7-entry "STO-stack" ... and each TLB entry had a 3-bit identifier that corresponded with an entry in the STO-stack. Changing address space (loading a new STO value in CR1) would check to see if the STO value was already in the STO-stack ... if so, it just started using that value. If not, one of the entries in the STO-stack would be selected for replacement ... and all of the associated TLB entries would be flushed.
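The STO-stack mechanism above can be sketched as follows; the replacement policy (round-robin here) and the TLB representation are assumptions for illustration, not the actual 165/168 hardware behavior:

```python
# Sketch of a 7-entry STO-stack: each TLB entry is tagged with a 3-bit
# slot id; loading a new STO either reuses a resident slot or replaces
# one, flushing only that slot's TLB entries (not the whole TLB).

class StoStack:
    def __init__(self, slots=7):
        self.slots = [None] * slots   # STO value held by each slot
        self.tlb = {}                 # vpage -> (slot_id, page_frame)
        self.next_victim = 0          # round-robin victim (assumed policy)

    def load_cr1(self, sto):
        """Switch address space; return the slot id tagging new TLB entries."""
        if sto in self.slots:
            return self.slots.index(sto)       # already resident: just use it
        victim = self.next_victim
        self.next_victim = (victim + 1) % len(self.slots)
        # flush all TLB entries associated with the replaced slot
        self.tlb = {v: e for v, e in self.tlb.items() if e[0] != victim}
        self.slots[victim] = sto
        return victim
```

A switch back to a recently used address space thus keeps its TLB entries live, unlike the low-end machines that flushed everything on every segment-table-pointer change.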
One of the problems with the introduction of the 168-3 ... was they doubled the cache from 32kbytes to 64kbytes ... and used the 2k bit as one of the address bits for indexing cache entries. When in 2k virtual page mode ... the 168-3 ran with only 32k cache active (only using the full 64k cache when in virtual 4k page mode). There was a customer running VS1 under vm-370: VS1 was a 2k virtual page operating system ... and vm-370 had to provide its virtual machine operation with 2k virtual page shadow page tables. The customer upgraded from a 168-1 to a 168-3, doubling machine cache ... and thruput got much worse. In theory, running with half the 168-3 cache should have been no worse than a 168-1 ... however, the mapping process changed.
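The arithmetic behind the 2k-bit story is that a virtually-indexed cache can only use address bits inside the page offset before translation completes; the line size below is a made-up illustrative value, not the actual 168-3 cache geometry:

```python
# Why using the "2k bit" (the address bit just above a 2k page offset)
# as a cache index bit halves the usable cache in 2k-page mode: only
# bits within the page offset are valid pre-translation, so the number
# of sets reachable without translation is page_size // line_size.

def usable_sets(page_size, line_size):
    # index bits available before translation lie between the line
    # offset and the page boundary
    return page_size // line_size

print(usable_sets(4096, 32))  # 4k pages: 128 sets reachable
print(usable_sets(2048, 32))  # 2k pages: 64 sets -- half the cache
```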
801-risc defined inverted page tables
i've made the assertion that 801-risc was in some sense a reaction to the failure of FS
with the swing to the complete opposite end of hardware complexity ... instead of maximum hardware complexity ... have minimum hardware complexity ... and move a lot of issues into software, pushing the envelope on compiler (pl.8) and operating system (cp.r) technology.
Exceptions at basic block boundaries 545
360-67 & 370 used shared segments for sharing of pages ... i.e. implicitly only one set of PTEs per shared segment (multiple different address spaces, or segment tables ... had segment table entries pointing to the same page table).
in the late 70s ... there was a push to migrate the scores of different internal corporate microprocessors to an 801-risc architecture. for instance, all of the low-end and mid-range 360s and 370s had been implemented with a variety of microprocessor engines that were coded to emulate 360-370 (avg. about ten native microprocessor instructions per 360-370 instruction). the follow-on to the 4341 mid-range processor was nominally going to be a native-mode 801-risc processor. the issue was that silicon state of the art was getting to the point where you could just about do the full 370 in a silicon chip ... rather than having to emulate it. i helped write the drafts that argued for having the 4341 follow-on be a native 370 silicon implementation (with much better price-performance) rather than an 801-risc with 370 "microcode" emulation (at a 10:1 penalty).
in the early 80s, research had a joint project with the office products division to use an 801-risc chip; ROMP with cp.r (implemented in PL.8) as a displaywriter follow-on. one of the cp.r (pl.8) issues was that loads of stuff had been moved from hardware to software ... there were inverted page tables, no hardware protection domains ... any inline software could execute every instruction and access every machine facility. things like protection and security were provided and supported by correct compiler-generated code and correct operating system loading.
Instead of segment and page tables, there were 16 segment registers that could be loaded with a 12bit "segment" identifier. The top four bits of a 32bit virtual address were used to select a segment register, which supplied a 12bit segment identifier. The 12bit segment identifier plus the remaining 28bit virtual address was used to index the TLB. In 370 terms, it was a PTO-associative TLB with 256mbyte segments ... because each TLB entry was associated with a specific segment value (as opposed to a virtual address space value). Write-ups even referred to the ROMP chip having 40bit virtual addressing ... from TLBs being tagged with the 12bit segment associative value (combined with the 28bit virtual segment displacement value). In theory, inline application code could dynamically change segment register values ... as easily as more familiar code could change address pointers in general purpose registers.
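The 40bit virtual addressing described above falls directly out of the bit arithmetic; a minimal sketch (register contents here are arbitrary example values, not real ROMP state):

```python
# ROMP-style address formation: top 4 bits of a 32-bit address select
# one of 16 segment registers holding a 12-bit segment id; the id
# concatenated with the remaining 28-bit displacement yields the
# 40-bit virtual address presented to the (inverted-table) TLB.

def romp_virtual(segment_regs, addr32):
    seg_reg = (addr32 >> 28) & 0xF          # top 4 bits pick a register
    seg_id = segment_regs[seg_reg] & 0xFFF  # 12-bit segment identifier
    disp = addr32 & 0x0FFFFFFF              # 28-bit segment displacement
    return (seg_id << 28) | disp            # 40-bit virtual address

regs = [0] * 16
regs[1] = 0xABC                             # hypothetical segment id
v = romp_virtual(regs, 0x12345678)          # selects segment register 1
assert v == (0xABC << 28) | 0x02345678
```

Reloading a segment register swaps a 256mbyte region of the address space in a single register write, which is the sense in which application code could juggle segments as casually as address pointers.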
The displaywriter project was ended, and the group decided to retarget the platform to the unix workstation market (PC-RT). One of the things that had to be added back in was hardware support for protection domains. The company that had been hired to do the AT&T unix port for the ibm-pc (pc-ix) was hired to do a similar port for "AIX". However, there were all these pl.8 programmers ... so instead of having the AIX port done to the bare hardware ... an abstract virtual machine was defined for the AIX port ... and the pl.8 programmers were put to work implementing the VRM ... which talked to the real hardware and provided the abstract virtual machine interface. There was some claim that this could be done faster and easier than having the outside company do an AT&T unix port to the native hardware.
The counter was that the palo alto science center did a BSD port to the native ROMP hardware with fewer resources than were used in either the AT&T "AIX" port to the VRM or the VRM effort itself.
The follow-on chipset for ROMP was RIOS (power). it still had inverted page tables and 16 segment registers ... but they doubled the segment-associative number of bits from 12 to 24 ... as a result, you may trip across early descriptions of power referring to its 52bit virtual address capability.
a couple recent posts that talk about the transition from free software to priced-licensed software ... and some impact of FS on the transition.