lots of cache implementations have significant (direct) cross-cache chatter about what cache lines they have (cache coherency protocols). multi-level L1, L2, L3 cache architectures were discussed from...
there is what i call the dup/no-dup issue ... with multi-level "cache" architectures ... it is typically aggravated when two adjacent levels are of similar sizes.
i guess i ran into it originally in the 70s with three levels of storage hierarchy: main memory, 2305 fixed-head paging disk, and 3330 disk paging. i had done page migration ... moving inactive pages from the (fast) 2305 paging area to 3330 disk paging.
the issue was that as main storage sizes were growing ... they were starting to be comparable in size to available 2305 page capacity; say 16mbyte real memory and a couple 12-mbyte 2305s.
up until then, the strategy was that when fetching a page from 2305 to main storage ... the copy on the 2305 remained allocated. this was to optimize the case where the page in real storage selected for replacement hadn't been modified during its most recent stay in memory ... it could then simply be discarded w/o having to write it out (the "duplicate" copy on the 2305 was still valid). the problem was that the 2305s could be nearly completely occupied with pages that were also in real storage. when 2305 capacity was being stressed ... it was possible to get better thruput by going to a "no-dup" strategy; when a page was read from the 2305 into memory, the copy on the 2305 was deallocated and made available. this always cost a write when a page was selected for replacement ... but the total number of "high-performance" pages might be doubled (the total number of distinct pages either in memory or on the 2305).
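as a rough illustration of that dup vs. no-dup tradeoff (purely a sketch with made-up names, not code from any actual system), the page-in/replacement logic amounts to something like:

class PagingDevice:
    """stand-in for the 2305 paging device; holds one copy per allocated slot"""
    def __init__(self):
        self.slots = {}                    # virtual page -> data on the 2305

    def read(self, page):
        return self.slots[page]

    def free(self, page):                  # give the slot back (no-dup)
        del self.slots[page]

    def write(self, page, data):           # page-out to the 2305
        self.slots[page] = data


def page_in(page, memory, dev, dup):
    data = dev.read(page)
    if not dup:
        dev.free(page)                     # no-dup: slot immediately reusable
    memory[page] = {"data": data, "dirty": False}


def page_replace(page, memory, dev, dup):
    frame = memory.pop(page)
    if dup and not frame["dirty"]:
        return                             # 2305 copy still valid, just discard
    dev.write(page, frame["data"])         # no-dup (or modified): always write

the dup version saves the write for unmodified pages; the no-dup version roughly doubles the number of distinct pages the memory+2305 combination can hold.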
this showed up again later, in the early 80s, with the introduction of the "ironwood" 3880-11 disk controller page cache. it was only 8mbytes. you might have a system with 32mbytes of real storage and possibly four 3880-11 controllers (each w/8mbytes, for 32mbytes total).
you could either do 1) a normal page read ... in which case the page was both in memory and in the disk controller cache, or 2) a "destructive" read, in which case any cache copy was discarded. when you replaced and wrote out a page, it went to the cache.
the "dup" strategy issue was that the 32mbytes of disk controller cache was occupied pretty much by duplicates of pages in real storage ... and therefore there would never be a case to fetch them (since any request for a page would be satisfied by the copy in real memory). it was only when a no-dup strategy was used (destructive reads) that a high percentage of pages in the disk controller cache were not also in real memory ... and therefore might represent something useful ... since only when there was a call for a page not in real storage would there be a page i/o request that could go thru the cache controller. if the majority of the disk controller cache was "polluted" with pages also in real storage (duplicates) ... then when there was a page read ... there would be a very small probability that the requested page was in the disk cache (since there were relatively few pages in the cache that weren't already in real storage).
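the 3880-11 interaction can be sketched the same way (again just an illustration with invented names, not the actual controller microcode); a normal read leaves a duplicate in the controller cache, a destructive read discards it, and page-out always goes thru the cache:

controller_cache = {}      # page -> data staged in the 3880-11 cache
real_storage = {}          # page -> data in main memory

def page_read(page, destructive, read_from_disk):
    if page in controller_cache:
        data = controller_cache[page]      # cache hit, no arm motion
        if destructive:
            del controller_cache[page]     # don't keep a duplicate
    else:
        data = read_from_disk(page)
        if not destructive:
            controller_cache[page] = data  # normal read leaves a copy behind
    real_storage[page] = data
    return data

def page_writeout(page, write_to_disk):
    data = real_storage.pop(page)
    controller_cache[page] = data          # replaced pages always go to the cache
    write_to_disk(page, data)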
something similar came up a few years ago with some linux configurations ... machines with real memory of 512mbytes, 1gbyte, and greater than 1gbyte ... with 9gbyte disk drives. if you allocated a 1gbyte paging area with 1gbyte of real storage ... and a dup strategy ... then the max. total virtual pages would be around 1gbyte (modulo lazy allocation strategy for virtual pages). however, a no-dup strategy would support up to 2gbytes of virtual pages (1gbyte on disk and 1gbyte in memory ... and no duplicates between what was in memory and what was on disk). the trade-off was that the no-dup strategy required more writes (every page replacement, whether the page had been altered or not) while potentially allowing significantly more total virtual pages.
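the arithmetic for that configuration is straightforward (figures taken from the example above, nothing measured):

real_storage = 1024       # mbytes of main memory
paging_area  = 1024       # mbytes of disk paging space

# dup strategy: every virtual page keeps a slot on disk whether or not it is
# also resident, so total virtual space is bounded by the paging area
dup_max_virtual = paging_area                      # ~1 gbyte

# no-dup strategy: resident pages give their disk slot back, so the levels add
no_dup_max_virtual = real_storage + paging_area    # ~2 gbytes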
.... and for another cache drift ... i had done the original distributed lock manager (DLM) for the hacmp product
it supported distributed-cluster operation with semantics similar to the vax-cluster dlm (in fact, some of the easiest ports to ha-cmp were dbms systems that had already developed support for running on vax-cluster). one of the first such dbms (adapting their vax-cluster support to ha-cmp) was ingres (which subsequently went thru a number of corporate owners and relatively recently was being spun off as open source). in fact, some of the ingres people contributed their list of top things that vax-cluster could have done better (in hindsight), which i was able to use in the ha-cmp dlm.
now some number of these database systems have their own main memory cache and do things that they may call "fast commit" and "lazy writes" ... i.e. as soon as the modified records are written to the log, they are considered committed ... even tho the modified records remain in the real storage cache and haven't been written to their (home) database record position. accesses to the records would always get the real-storage, modified, cached version. in the event of a failure, the recovery code would read the modified, committed changes from the log, read the original records from the dbms, apply the changes, and write out the updated records.
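a minimal sketch of that fast-commit/lazy-write flow (names invented; a real dbms logs record changes rather than whole values and the recovery pass applies them to the originals, but the flow is the same):

log = []              # append-only redo log (treated as stable storage here)
buffer_cache = {}     # record id -> current (possibly modified) record
database = {}         # record id -> last version written to its home position

def update(rec_id, new_value):
    log.append((rec_id, new_value))      # once the log write completes ...
    buffer_cache[rec_id] = new_value     # ... the change counts as committed,
                                         # even tho the home copy is stale

def read(rec_id):
    # readers always see the cached, committed version first
    return buffer_cache.get(rec_id, database.get(rec_id))

def lazy_write(rec_id):
    # background write of the modified record to its home position
    database[rec_id] = buffer_cache[rec_id]

def recover():
    # after a crash the cache is gone; re-apply committed changes from the log
    for rec_id, value in log:
        database[rec_id] = value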
so in distributed mode, dbms processes needed to obtain the specific record lock from the DLM. i had worked out a scheme where if there was an associated record in some other processor's cache, i would transmit the lock grant and the modified record together (direct cache-to-cache transfer), avoiding a disk i/o. at the time, the dbms vendors were very sceptical. the problem wasn't the transfer but the recovery. they had adopted an implementation where, in a distributed environment, when a locked record moved to a different processor (cache), rather than transmitting it directly, it would first be forced to its database (home) record position on disk ... and the dbms on another processor would then retrieve it from disk (somewhat akin to forcing a line from cpu cache to ram before it can be loaded into another cpu's cache). the problem was that there were all sorts of recovery scenarios with distributed logs and distributed caches ... if multiple versions of "fast commit" records were laying around in different distributed logs. the issue during recovery was determining the order of possibly multiple changes to the same record sitting in different logs. potentially none of these modifications might yet appear in the physical database record ... aka what was the recovery application order of modifications from different logs for the same record.
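one way to picture the ordering problem (and the cache-to-cache handoff) is to carry a per-record version number in each node's log; this is only an illustration of the recovery-ordering issue, not the actual ha-cmp dlm or any vendor's scheme, and all the names are invented:

from dataclasses import dataclass, field

@dataclass
class RecordLock:
    rec_id: str
    version: int = 0          # bumped on every committed modification
    value: object = None      # current (possibly not-yet-written) contents

@dataclass
class Node:
    name: str
    cache: dict = field(default_factory=dict)   # rec_id -> RecordLock
    log: list = field(default_factory=list)     # (rec_id, version, value)

def fast_commit(node, lock, new_value):
    lock.version += 1
    lock.value = new_value
    node.log.append((lock.rec_id, lock.version, new_value))
    node.cache[lock.rec_id] = lock

def grant_to(owner, requester, rec_id):
    # ship the lock grant and the modified record together (cache-to-cache),
    # instead of forcing the record to its home position on disk first
    lock = owner.cache.pop(rec_id)
    requester.cache[rec_id] = lock
    return lock

def recover(nodes):
    # merge the surviving logs; the version number says which of possibly
    # several changes to the same record (in different logs) is the latest
    latest = {}
    for n in nodes:
        for rec_id, version, value in n.log:
            if version > latest.get(rec_id, (0, None))[0]:
                latest[rec_id] = (version, value)
    return {rec_id: value for rec_id, (version, value) in latest.items()}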
at the time, it was deemed too complex a problem to deal with ... so they went with the safer approach ... have at most one outstanding "fast commit" modification, and anytime a record moved from one distributed dbms cache to another dbms cache ... first force it to disk. however, in the past couple of years, i've had some vendor people come back and say that they are now interested in doing such an implementation.
some specific posts mentioning dlm: