virtual memory 4497
I never had that ... dating back to my original implementations in the 60s.
working set size was mechanism for limiting over commit of real storage and resulting page thrashing ... pathological case was that you needed a page that wasn't in real storage, took a page fault, waited while the page was brought in, and by the time that page was ready ... other pages that you required had been stolen; worst case was that no progress got made (since all tasks were constantly stealing pages from each other).
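a rough sketch of that kind of working-set admission control (purely illustrative ... the Task class, the frame count, and the FIFO queueing are assumptions, not any particular system): a task is only allowed to compete for real storage while the aggregate of the estimated working set sizes still fits, which limits over-commit and the resulting thrashing.

from collections import deque

REAL_FRAMES = 1024            # frames of real storage (assumed value)

class Task:
    def __init__(self, name, ws_size):
        self.name = name
        self.ws_size = ws_size  # estimated working-set size, in frames

active = []                   # tasks allowed to compete for real storage
held = deque()                # tasks held back to avoid over-commit

def try_admit(task):
    """Admit only while the aggregate working-set estimate fits in real storage."""
    in_use = sum(t.ws_size for t in active)
    if in_use + task.ws_size <= REAL_FRAMES:
        active.append(task)
        return True
    held.append(task)
    return False

def on_exit(task):
    """When a task leaves, re-try held tasks in FIFO order until one no longer fits."""
    active.remove(task)
    while held:
        nxt = held[0]
        if sum(t.ws_size for t in active) + nxt.ws_size > REAL_FRAMES:
            break
        active.append(held.popleft())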
virtual memory 4502
re: part of the instruction storage ref-change trace and paging simulation was also the program reorganization work...
working set size was mechanism for managing-limiting concurrent competition for available real storage.
global lru replacement was mechanism for making sure that the optimal set of all pages were resident and ready for use in real storage.
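a rough sketch of global LRU approximated with a clock-style scan of reference bits over all real-storage frames, regardless of which task owns each page (the Frame class, frame count, and reference-bit handling are illustrative assumptions):

class Frame:
    def __init__(self):
        self.page = None         # (task, virtual page) currently in this frame
        self.referenced = False  # reference bit, set on any access to the page

frames = [Frame() for _ in range(1024)]
hand = 0                         # single clock hand sweeps all frames globally

def select_victim():
    """Scan all real-storage frames as one global pool: clear and skip frames
    referenced since the last sweep, steal the first unreferenced frame found."""
    global hand
    while True:
        f = frames[hand]
        hand = (hand + 1) % len(frames)
        if f.page is None:
            return f             # free frame, use it directly
        if f.referenced:
            f.referenced = False # recently used: give it another sweep
        else:
            return f             # not referenced since last sweep: steal it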
as real storage got more plentiful ... starting by at least the latter part of the 70s ... there were other techniques used to manage the amount of paging. one strategy that had been developed in the late 70s ... was tracking per-task pagewait time (i.e. the bottleneck had shifted from constrained real storage to constrained disk i-o ... from the 60s there was an inter-relationship between constrained real storage and the amount of paging involving disk i-o, however, moving into the late 70s ... requests could start queueing against disk, even with relatively little real storage contention).
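a minimal sketch of that kind of per-task pagewait accounting (the function names, the wrapper, and the 0.2 threshold are assumptions for illustration): charge each task for the time it spends waiting on page-in i-o, then compare that against the task's progress target.

import time

pagewait = {}                    # task name -> accumulated seconds spent in pagewait

def page_fault(task_name, read_page_from_disk):
    """Wrap the (simulated) page-in i-o and charge the wait to the faulting task."""
    start = time.monotonic()
    read_page_from_disk()        # placeholder for the actual disk read
    pagewait[task_name] = pagewait.get(task_name, 0.0) + (time.monotonic() - start)

def behind_schedule(task_name, elapsed, target_pagewait_fraction=0.2):
    """A task is 'behind' if pagewait is eating more than the target fraction
    of its elapsed time (the 0.2 threshold is purely illustrative)."""
    return pagewait.get(task_name, 0.0) > target_pagewait_fraction * elapsed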
virtual memory 4501
one of the things that some of the paging simulation work done at the science center looked at the trade-offs of different...
part of the issue was the characteristic somewhat written up as "strong" and "weak" working sets ... i.e. the set of actual pages required was strongly consistent over a period ... versus somewhat changing. the set "size" over time could be the same in both ... but once a "strong" working set had been established, the task would run with no additional paging operations. in a weak working set, once the initial set had been established ... there would continue to be paging over time (the size of the set didn't change, but the members of the set changed over time).
to try and handle this situation ... there were implementations that modified the global LRU replacement strategy based on task progress (or its inverse, the amount of pagewait); a task making little progress (higher pagewait) ... would have some percentage of its pages selected by global LRU ... skipped. i.e. global LRU would still select pages from the task, but because the task was making less progress than its target, some percentage of its selected pages would be retained ... global LRU would skip past the selected page and search for another. the percentage of pages initially selected by the global LRU replacement strategy ... but then skipped ... was related to how much progress the task was making vis-a-vis its objective. this tended to lower pagewait time for tasks with somewhat weak working sets that were getting behind schedule.
the issue here was that available real storage tended to be much larger than the aggregate of the concurrent working set sizes ... so using aggregate working set size to limit the number of contending tasks as a mechanism for limiting paging was ineffective. global LRU still provided that the optimal set of pages was resident. however, tasks with strong working sets might have an advantage over tasks with weaker working sets. as a result, a mechanism was provided to reduce the rate of pages stolen from those tasks (typically tasks with weaker working sets) that were getting behind schedule in their targeted resource consumption (because of time spent in pagewait).
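a rough sketch of the modified global LRU just described (the Task and Frame classes, the skip-fraction scaling, and the 0.5 cap are all illustrative assumptions): the global scan still selects pages from every task, but pages belonging to a task that is behind its target because of pagewait are retained some percentage of the time, and the scan moves on.

import random

class Task:
    def __init__(self, target_progress=1.0, actual_progress=1.0):
        self.target_progress = target_progress   # targeted share of resource consumption
        self.actual_progress = actual_progress   # what the task actually achieved so far

class Frame:
    def __init__(self, owner=None):
        self.owner = owner        # task owning the page currently in this frame
        self.referenced = False

def skip_fraction(task):
    """Fraction of this task's globally selected pages to retain, scaled by how
    far the task is behind its target (0.0 = on schedule; 0.5 cap is assumed)."""
    shortfall = max(0.0, task.target_progress - task.actual_progress)
    return min(0.5, shortfall)

def select_victim(frames, hand):
    """Global clock scan, but retain some of the selected pages of lagging tasks."""
    while True:
        f = frames[hand]
        hand = (hand + 1) % len(frames)
        if f.referenced:
            f.referenced = False
            continue
        if f.owner is not None and random.random() < skip_fraction(f.owner):
            continue              # page selected but retained; keep searching globally
        return f, hand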
virtual memory 4503
On Thu, 01 Jun 2006 17:07:33 -0400 in alt.folklore.computers, Bill The comment was meant to be general about...
for the other scenario ... for tasks with an extremely weak working set ... where even all available real storage provided them little or no additional benefit ... there was a strategy for "self-stealing" ... aka a form of local LRU ... but that was only for limiting the effects of extremely pathological behavior ... it wasn't used for tasks operating with normal operational characteristics.
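a minimal sketch of that "self-stealing" / local LRU fallback (the SelfStealingTask class and its resident-page bookkeeping are illustrative assumptions): once a task has been flagged as having a pathologically weak working set, a new page for it replaces one of its own resident pages rather than going through the global replacement path.

from collections import OrderedDict

class SelfStealingTask:
    def __init__(self, name, resident_limit):
        self.name = name
        self.resident_limit = resident_limit
        self.resident = OrderedDict()     # page -> None, kept in LRU order

    def touch(self, page):
        """Bring a page in (or re-reference it); once at its limit, the task
        steals from its own resident pages, never from other tasks."""
        if page in self.resident:
            self.resident.move_to_end(page)                # re-reference
            return None
        victim = None
        if len(self.resident) >= self.resident_limit:
            victim, _ = self.resident.popitem(last=False)  # steal own LRU page
        self.resident[page] = None
        return victim                                      # page evicted, if any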
something similar was found in the detailed analysis of disk record caching designs.
given detailed tracing of all disk record access across a wide range of different live operational environments ... for a given fixed amount of total electronic memory ... and all other things being equal ... using the memory for a single global system cache provided better thruput than dividing-partitioning the memory into sub-components.
i.e. instead of using all the fixed amount of electronic memory for a single global system cache, the available memory would be divided up and used in either controller-level caches (as in the reference to 3880-11 and 3880-13 controller caches) and-or disk-level caches.
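a rough sketch of how such a comparison runs against a record-access trace (the LRUCache class and the trace format are illustrative assumptions): replay the same trace of (device, record) references once against a single global LRU cache of N entries and once against the same N entries split into per-device caches, then compare hit ratios; the global configuration can adapt to skew in activity across devices, which is the effect described above.

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)
            return True                       # hit
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[key] = None
        return False                          # miss

def hit_ratio_global(trace, total_entries):
    cache = LRUCache(total_entries)
    hits = sum(cache.access((dev, rec)) for dev, rec in trace)
    return hits / len(trace)

def hit_ratio_partitioned(trace, total_entries, devices):
    per_dev = {d: LRUCache(total_entries // len(devices)) for d in devices}
    hits = sum(per_dev[dev].access(rec) for dev, rec in trace)
    return hits / len(trace)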
there were two caveats: some amount of electronic store was useful at the disk head level for staging a full track of data as rotational delay mitigation, and you needed a strategy to minimize caching of purely sequential i-o ... where caching provided no benefit; i.e. purely sequential i-o basically violates the basic assumption behind least recently used replacement strategies ... that records-pages used recently will be used in the near future. for that behavior, a "most recently used" replacement strategy would be closer to ideal ... which can also be expressed as "self-stealing" ... the new request replaces the previous request by the same task (trying to avoid a purely sequential i-o transfer wiping out all other existing cache entries).
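a rough sketch of that sequential-i-o caveat (the run-length threshold, the StreamState class, and the single-slot policy are illustrative assumptions): detect a run of consecutive record numbers from a stream, then have each new record in the run replace the run's own previous cache entry, so a sequential sweep can't flush everything else.

SEQ_RUN_THRESHOLD = 4            # consecutive records before a stream counts as sequential

class StreamState:
    def __init__(self):
        self.last_record = None
        self.run_length = 0
        self.cached_record = None    # the single slot a sequential stream may hold

def classify(state, record):
    """Track the run length of consecutive record numbers for one stream."""
    if state.last_record is not None and record == state.last_record + 1:
        state.run_length += 1
    else:
        state.run_length = 1
    state.last_record = record
    return state.run_length >= SEQ_RUN_THRESHOLD

def cache_reference(cache, state, record):
    """Sequential streams self-steal their single slot; others use the normal path."""
    if classify(state, record):
        if state.cached_record is not None:
            cache.pop(state.cached_record, None)   # replace the run's own previous entry
        cache[record] = None
        state.cached_record = record
    else:
        cache[record] = None                        # normal path (LRU handled elsewhere)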
virtual memory 4498
Bill Todd I found such studies, which appear to show that VMS's mechanism is within, by looking at...
and in the analogy back to paging as a form of cache and its relationship to least recently used replacement strategies ... past a certain point, extremely weak working sets ... can appear very little different from sequential reading.