Speed is usually the thing you trade off when security and/or data integrity is a higher priority. I had no intention of implying that doing it by hand, without a hardware assist, would be at all efficient. That's exactly why hardware types would get beat up until they provided the function. Memory protection in hardware isn't cheap and isn't simple. You need some extra wires to wrap for that (I'm talking out of my butt here because I have no idea how it gets done in hardware). It is a tradeoff. If you have a powerful software group that actually talks to the hardware group, the tradeoff takes about a blink of an eye before the option of dropping memory protection as a cost saver gets dismissed.
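To make the "by hand" cost concrete, here's a minimal sketch (my own illustration, not any particular system's code, in C): without a hardware assist, every single access has to pay a software bounds check, which is exactly the kind of thing an MMU does for free alongside the access.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define REGION_SIZE 4096
    static uint8_t region[REGION_SIZE];

    /* Every load goes through a bounds check -- this per-access cost is
       what you eat when memory protection is done purely in software. */
    static uint8_t checked_load(size_t offset)
    {
        if (offset >= REGION_SIZE) {
            fprintf(stderr, "protection fault at offset %zu\n", offset);
            abort();   /* the software stand-in for a hardware trap */
        }
        return region[offset];
    }

    int main(void)
    {
        region[10] = 42;
        printf("%u\n", checked_load(10));   /* in bounds, fine */
        checked_load(REGION_SIZE + 1);      /* out of bounds, "faults" */
        return 0;
    }

Multiply that branch by every load and store a program makes and the speed hit is obvious, which is the whole argument for pushing the check down into the hardware.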
The user throughput and turnaround time would be a tad faster than using a card deck, yes. But you wouldn't have to reinstall the whole damned system. While a software system is getting rebuilt, it can't be used for real work. So the bottom-line cost calculation would ship those jobs off to a capable system. The one true law about computing is that it has to stay up to deliver any computing services.
In this case, it is "faster" and more efficient to ship the job off to somewhere else. You have to count everything to decide what is "fast".
Sure. That's why I would use MS-DOS as an example of what not to do. I bet this could use up three semesters.
That's still not really helping. It does prevent people from easily writing software for the OS, but it does exactly nothing to prevent bugs in the compiler or the emulator or...
It's a subjective measure because it depends on who uses the new architecture.