big endian vs. little endian, why 1959
When I said "alien" it was my way of referring to an integer format that is not native to the CPU you're running on...
You need to look at the hardware implementations. Back when transistors were much more expensive, a lot of machines used only byte-wide data paths in the CPU. Arithmetic was done byte-serial. If data is fetched little-endian and a byte at a time from memory, byte-serial addition is easy to do, and the ALU stores the carry bit between bytes. It makes for a clean and efficient architecture.
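The byte-serial scheme described above can be sketched in C. This is an illustrative model, not any particular machine's design: the "ALU" adds one byte per step, starting at the lowest address, and keeps the carry between bytes. It works precisely because little-endian storage delivers the least significant byte first.

```c
#include <stdint.h>

/* Byte-serial addition over little-endian byte arrays (illustrative
 * sketch). A byte-wide data path fetches byte 0 first; since that is
 * the least significant byte in little-endian order, the adder can
 * start immediately and just carry into the next byte. */
static void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, int n)
{
    unsigned carry = 0;
    for (int i = 0; i < n; i++) {
        unsigned t = a[i] + b[i] + carry; /* one byte-wide add per step */
        sum[i] = (uint8_t)t;              /* low 8 bits become the result byte */
        carry  = t >> 8;                  /* carry bit held for the next byte */
    }
}
```

A big-endian machine fetching bytes in address order would see the most significant byte first and have to buffer the whole operand (or fetch backwards) before it could begin adding.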
"More intuitive" is a human thing; it only matters if you're trying to read core dumps. I've read plenty of such dumps in both big- and little-endian format, and I don't recall it ever causing cognitive dissonance. I suspect that would be true for most people working at that low level.
An interesting issue for big- vs. little-endian relates to people who abuse C pointers: for example, using a (short *) pointer to read from a long variable. This sorta works on little-endian machines but doesn't work at all sensibly on big-endian ones. I have heard the argument that this makes little-endian representation superior, but that only holds if you believe such coding styles are a good idea.
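To see why the cast "sorta works" only on little-endian machines, here is a sketch that simulates both layouts with explicit byte arrays (so the result doesn't depend on the host CPU). A (short *) read picks up the two bytes at the variable's lowest address; on a little-endian machine those happen to be the low-order half of the long, while on big-endian they are the high-order half.

```c
#include <stdint.h>

/* Illustrative model: what a 16-bit read at a 32-bit variable's
 * address sees under each byte order. The helper combines the first
 * two bytes in memory according to the given convention. */
static uint16_t first_two_bytes(const uint8_t *p, int big_endian)
{
    return big_endian ? (uint16_t)((p[0] << 8) | p[1])   /* MSB first */
                      : (uint16_t)(p[0] | (p[1] << 8));  /* LSB first */
}
```

For the value 0x00001234, the little-endian layout {0x34, 0x12, 0x00, 0x00} yields 0x1234 (the low half, which is what the pointer-abuser wanted), while the big-endian layout {0x00, 0x00, 0x12, 0x34} yields 0x0000 (the high half). Note that even on little-endian hardware the cast is undefined behavior under C's strict-aliasing rules; this sketch only models what the memory access would observe.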
I suppose this trick might allow a *compiler* to generate more efficient code in a few cases, which would be a legitimate advantage.
--
Jonathan Griffitts
AnyWare Engineering
Boulder, CO, USA