IBM 610 workstation computer 3400
Whether it is done in software or pure hardware, I do not care.
The buses are now the speed bottleneck. Most instructions execute in a single clock cycle, but reads and writes to main memory take 50 to 100 clock cycles, and it is getting worse. Several CPUs on the same chip probably make the delays bigger, since they share the same bus. Very complex systems, such as cache memory, are used to speed this up.
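A rough back-of-envelope sketch of why this hurts, using the latency figures above (the miss rate is an assumed, illustrative number, not a measurement):

```python
# Back-of-envelope effect of memory latency on throughput.
# Assumed figures: 1 cycle per instruction that stays on chip,
# 75 cycles average for a main-memory access (midpoint of 50-100),
# and, say, 1 in 20 instructions going out to main memory.
base_cpi = 1.0
memory_latency = 75        # cycles per main-memory access
miss_rate = 1 / 20         # fraction of instructions hitting main memory

effective_cpi = base_cpi + miss_rate * memory_latency
print(f"effective cycles per instruction: {effective_cpi:.2f}")
```

With these numbers the machine averages almost 5 cycles per instruction, i.e. it spends most of its time waiting on the bus rather than computing.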
When a write is made to a location, the data and address are stored in cache memory. When the bus is free, this data is copied down to main memory. This can happen to the same location several times a second. On systems like PCs, updated blocks of RAM are also copied to the virtual-storage area of the disk, every 5 seconds or so.
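The mechanism described above is write-back caching with dirty bits; a minimal sketch (hypothetical structure, not any specific CPU's design) shows why a hot location generates far fewer bus writes than program writes:

```python
# Minimal write-back cache sketch. Writes land in the cache and only
# mark the line dirty; dirty lines are copied down to main memory when
# the bus is free, so a location rewritten many times between flushes
# costs only one bus transfer.
class WriteBackCache:
    def __init__(self):
        self.lines = {}     # address -> cached value
        self.dirty = set()  # addresses modified since the last flush

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)   # no bus traffic yet

    def flush(self, main_memory):
        # Called when the bus is idle: each dirty line goes down once,
        # no matter how many times it was written in between.
        for addr in self.dirty:
            main_memory[addr] = self.lines[addr]
        self.dirty.clear()

ram = {}
cache = WriteBackCache()
for i in range(1000):       # a loop counter rewritten 1000 times...
    cache.write(0x10, i)
cache.flush(ram)            # ...reaches main memory in a single write
print(ram[0x10])            # -> 999
```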
Memory usage is not flat: certain locations get updated far more than others. Examples of very heavy usage are loop variables and call-return stacks. Initial cache memories were about 64 bytes; now they can be megabytes. At only 64 bytes, dispersed randomly, main memory had to be kept updated or the system got confused by interrupts and process changes.
IBM 610 workstation computer 3401
Ok, then I stand by my assertion that main memory (particularly memory virtualized-paged with an MMU...
IBM 610 workstation computer 3402
KR Williams [snip] Now we can have a sensible debate. The lack of fast memory was the reason virtual-storage systems were invented. Each virtual storage...
IBM 610 workstation computer 3403
there is what i call the dup-no-dup issue ... with multi-level "cache" architectures ... typically it is aggravated when two adjacent levels are of similar sizes. i guess i ran into it originally in...
I am wondering if we can remove the intermediate transfers, or reduce the number of bus writes by one or two orders of magnitude. The computer's architecture would be changed so that the main memory is on chip; consequently, off-chip storage would not need updating.
Since software will always need more memory than is available on chip, it is worth keeping the write-to-disk of modified memory every 5 seconds or so.
Currently, reads of data and instructions from RAM have to wait while the bus is used to update RAM. Reduce the writes and this process automatically gets faster. The caching of instructions can also be revised, since the entire instruction block would be on chip.
A side effect of bringing main memory on chip is that the same physical area can hold 3 to 4 times as many RAM locations as cache locations, since there are no tags. Blocks of, say, 512 bytes would still need tags to let the virtual-memory system monitor usage and modifications. Virtual-memory systems could use their extra processing power to empty only, say, 10% of the on-chip memory, keeping the other 90%, when the "cache line" is full.
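The density gain depends heavily on how fine-grained the tagging is; a rough sketch of the arithmetic (illustrative tag and state widths, not a real chip's layout) shows that per-word tags carry a large overhead while the 512-byte blocks suggested above make tags nearly free:

```python
# Rough sketch of cache tag overhead. A cache entry needs a tag and
# state bits alongside its data; plain on-chip RAM needs only data cells.
# Assumed figures: ~40-bit tags, 2 state bits (valid + dirty).
def cache_overhead(block_bytes, tag_bits, state_bits=2):
    data_bits = block_bytes * 8
    total_bits = data_bits + tag_bits + state_bits
    return total_bits / data_bits   # area per data byte vs. plain RAM

# Per-word tagging (4-byte blocks) costs well over 2x the area:
print(f"{cache_overhead(4, 40):.2f}x")
# With 512-byte blocks the tag overhead almost vanishes:
print(f"{cache_overhead(512, 40):.3f}x")
```

Getting a full 3-4x density win from dropping tags would require very fine-grained (roughly per-word) tagging in the cache being replaced; with large lines the saving is much smaller, so the claim should be read as an upper bound.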