IBM 610 workstation computer 3401
there is what i call the dup-no-dup issue ... with multi-level "cache" architectures ... typically is aggravated when two adjacent levels are of similar sizes. i guess i ran into it originally...
Ok, then I stand by my assertion that main memory (particularly memory virtualized-paged with an MMU) is no more than a cache for disk, much the same as the L2 cache(s) is(are) for memory.
...have been for decades.
cycle but the latency is quite a bit higher.
Reads-writes to-from caches are a few to tens of clock cycles. Point?
Caches aren't all that complex, at least by today's standards. This stuff is pretty well known now.
You assume a write-through cache. Not a good assumption. Write-back caches only write data to memory on a cast-out or perhaps a snoop. The data doesn't get written to memory unless it must.
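The write-back behavior described above can be sketched in a few lines. This is a toy model, not any real machine's design; the line count and eviction order are illustrative assumptions:

```python
# Toy write-back cache: a store only dirties the line in cache.
# Memory is updated on eviction (cast-out), never on every write.

class WriteBackCache:
    def __init__(self, memory, num_lines=4):
        self.memory = memory      # backing store: dict of addr -> value
        self.lines = {}           # addr -> (value, dirty_bit)
        self.num_lines = num_lines

    def write(self, addr, value):
        if addr not in self.lines and len(self.lines) >= self.num_lines:
            self._evict()
        self.lines[addr] = (value, True)   # mark dirty; memory NOT touched

    def _evict(self):
        # cast-out: write the victim back only if it is dirty
        victim, (value, dirty) = self.lines.popitem()
        if dirty:
            self.memory[victim] = value

mem = {}
cache = WriteBackCache(mem, num_lines=1)
cache.write(0, 42)
print(mem)          # {} -- the write stayed in the cache
cache.write(8, 7)   # forces a cast-out of line 0
print(mem)          # {0: 42} -- dirty data reached memory only on eviction
```

The point the model makes: with a write-through policy every store would hit memory, while here memory sees traffic only when a dirty line is cast out.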
I've missed your point. If you're saying it's a matter of milliseconds to seconds, yawn. Of course. That's why we have L1s and L2s, and any number of other Ls.
Ok. We don't have 64 bytes. We have megabytes, as you point out.
Again, you needn't write to main memory unless the cache is full or *maybe* if another processor requires data from a line that's "dirty". These restrictions still hold in your "integrated main memory" proposal (at least as I understand it). The fact is that you really can't put all your memory on chip, so must swap it to the next level in the hierarchy. You've already said that you don't much care if this is managed by hardware or software. Thus I don't see how you've changed the solution.
If you say so. I'm not sure what this has to do with your proposed solution though.
KR Williams <snip> Now we can have a sensible debate. The lack of fast memory was the reason virtual storage systems were invented. Each virtual storage block has a pointer...
Only if your data-instruction doesn't already reside in the processor's cache.
Only if you have *all* your memory on chip.
No, the tags are not 3-4 times the area of the cache array. It's rather the other way, by a few orders of magnitude. For each cache line (maybe 256 bytes), only a subset of the address is stored, along with a few status bits.
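The arithmetic behind that claim is easy to check. A rough sketch, using the 256-byte line mentioned above; the cache size, address width, and status-bit count are illustrative assumptions, not any particular machine's:

```python
# Rough tag-overhead arithmetic: tag + status bits per line vs. data bits.
addr_bits = 32
line_bytes = 256
cache_bytes = 1 << 20                 # assume 1 MiB, direct-mapped

offset_bits = (line_bytes - 1).bit_length()                 # 8
index_bits = (cache_bytes // line_bytes - 1).bit_length()   # 12
tag_bits = addr_bits - offset_bits - index_bits             # 12
status_bits = 3                       # e.g. valid + dirty + one coherence bit
data_bits = line_bytes * 8            # 2048

overhead = (tag_bits + status_bits) / data_bits
print(f"{tag_bits} tag bits per {data_bits}-bit line -> {overhead:.1%} overhead")
```

With these numbers the tag array is well under 1% of the data array's bits, which is why large lines make tags cheap: only a subset of the address is kept, amortized over the whole line.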