How much RAM is 64K 36bit words of Core Memory 785
'Paul' wrote: "I get confused with the old core memory descriptions of memory being WORDS rather than bytes. For example: 64K (36-bit) Words of Core Memory. How much RAM is this in bytes?"
The term 'core memory' implies arrays of tiny ferrite doughnuts wired in a frame, each frame forming one bit-plane. The frames were built by hand, with three wires running through each doughnut. Such memory was large, power hungry, very slow, and VERY expensive (~$1,000 US per month per 1,000 bits; at least $6,000 US in today's dollars). Computers of this epoch tended to have very complex instructions requiring many bits per instruction. Some used long words (e.g. 72 bits) and one word per instruction. Some used smaller units of memory (e.g. 6-, 7-, or 8-bit characters) and multiple characters per instruction; some machines used a fixed number of characters per instruction, some a variable number. Some processors determined instruction length by the op code, some by a flag, and some by a special bit in each character (e.g. the word mark, a seventh bit, on the IBM 1401 and Honeywell 200 series).
There are many formats for data:
IBM used BCD encoding for decimal numbers, packing two digits into each 8-bit unit, with one 4-bit section reserved for the sign
processors that used 6-bit characters handled text with a character set of only 64 entries - no lower-case letters
several formats existed for floating point numbers
several formats existed for fixed point binary numbers
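The packed-decimal format mentioned above can be sketched in a few lines of Python. This is an illustrative reconstruction, not IBM's actual implementation; the 0xC/0xD sign nibbles follow the common IBM packed-decimal convention, and the function name `to_packed_bcd` is made up here.

```python
def to_packed_bcd(n: int) -> bytes:
    """Encode a signed integer in IBM-style packed decimal:
    two digits per byte, with the final low nibble holding
    the sign (0xC for +, 0xD for -)."""
    sign = 0xC if n >= 0 else 0xD
    digits = str(abs(n))
    if len(digits) % 2 == 0:   # pad so digits + sign nibble fill whole bytes
        digits = "0" + digits
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(to_packed_bcd(-1234).hex())   # -> 01234d
```

Note how -1234 fits in three bytes instead of the five a character-per-digit encoding would need; saving memory like this mattered when every bit cost real money.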
Systems with different word and character lengths used different data formats.
Keep in mind that these systems were sloooooow by today's standards. In the time of 'core memory', some instructions required milliseconds to complete. Mass-storage transfers into and out of memory were strictly sequential; batch processing required endless sorting of records, writing blocks of sorted records to tape, then merging, then resorting before processing could even begin. (Systems specifically designed and constructed for scientific calculation tended to be faster, to have larger 'core' memory storage, to be more expensive, and to have less I-O capability than 'business' systems.)
The equivalency you seek is not a simple number.
On one side is core memory:
multiple-microsecond cycle time (cycle, because reading a core memory bit requires inverting it, then restoring it)
hugely expensive: ~$0.20 per bit per month
many different types of incompatible memory organization and data formats
bigger than a breadbox (~30,000 bits)
On the other side is today's semiconductor memory:
single-digit nanosecond access time
~4,000,000 bits per penny
memory organizations and data formats that are much more standardized
a 4,000,000,000-bit memory module the size of a pack of Chiclets
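Taking the two cost figures quoted above at face value, the price gap per bit works out to roughly eight orders of magnitude (and that still compares a monthly rental against a one-time purchase):

```python
# Rough cost-per-bit comparison using the figures quoted above:
# core: ~$0.20 per bit per month (rental); DRAM: ~4,000,000 bits per penny.
core_cost_per_bit = 0.20                # dollars per bit per month
dram_cost_per_bit = 0.01 / 4_000_000    # dollars per bit, purchase price
ratio = core_cost_per_bit / dram_cost_per_bit
print(f"core memory cost ~{ratio:,.0f}x more per bit")  # ~80,000,000x
```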
Also, in the epoch of core memory, great emphasis was placed on programming to reduce memory requirements, even at the expense of run time, because memory was so expensive and I-O so slow.
If you want a number, just divide the number of bits in a word by 8 to get a rough equivalence. It won't mean much, but there it is. As far as I remember, 'core memory' had come and gone by the time the IBM System 360 popularized 'byte'. In fact, 'core memory' had a pretty short run.
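Applying that divide-by-8 rule to the OP's question gives a concrete figure; a quick sketch of the arithmetic:

```python
# "64K (36-bit) Words of Core Memory" converted to 8-bit bytes
words = 64 * 1024            # 64K words = 65,536 words
bits_per_word = 36
total_bits = words * bits_per_word   # 2,359,296 bits
total_bytes = total_bits // 8        # 294,912 bytes
print(total_bytes, "bytes =", total_bytes // 1024, "KB")  # 294912 bytes = 288 KB
```

So 64K 36-bit words holds the same number of bits as 288 KB of byte-addressed memory, though (as noted above) the bits were not organized into bytes at all.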
Bytes and words and characters aren't directly equivalent.
Alt Folklore Computers Newsgroups