Many (most?) early pre-microprocessor CPUs have some number of bits per word that is not a power of two. A surprisingly large number of early computers had 36-bit words, entirely due to the fact that humans have 10 fingers. The Wikipedia "36-bit" article has more details on the relationship between 10 fingers and 36 bits, and links to articles on many other historically important but no longer popular bit sizes, most of them not a power of two. In particular, Seymour Cray and his team built many highly influential machines with non-power-of-two word sizes and address sizes: 12 bit, 48 bit, 60 bit, etc.

(a) 8-bit addressable memory became popular because it was slightly more convenient for storing 7-bit ASCII and 4-bit BCD, without either awkward packing or wasting multiple bits per character, and no other memory width had any great advantage.

(c) Wider bus widths in theory made a CPU faster, but putting the entire CPU on a single chip made it vastly cheaper, and perhaps slightly faster, than any previous multi-part CPU system of any bus width. As Steele points out, that slight advantage is multiplied by economies of scale and market forces: more 8-bit-wide memories are used, so economies of scale make them slightly cheaper, which leads to even more 8-bit-wide memories being used in new designs, and so on.

At first there were barely enough transistors for a 4-bit CPU, then an 8-bit CPU. Later, there were barely enough transistors for a 16-bit CPU, launched to huge fanfare and a "16 bit" marketing campaign. Right around the time one would expect a 24-bit CPU, the first two RISC chips were 32 bits, for whatever reason, and people had been conditioned to think that "more bits are better", so every manufacturer jumped on the 32-bit bandwagon.
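The point in (a) about ASCII and BCD can be made concrete with a small sketch (my illustration, not part of the original answer): two 4-bit BCD digits pack exactly into one 8-bit byte, and a 7-bit ASCII character fits in a byte with only one bit to spare, whereas any narrower width would force awkward splitting across memory words.

```python
# Sketch: why an 8-bit memory width suits 4-bit BCD and 7-bit ASCII.
# (Hypothetical helper names, for illustration only.)

def pack_bcd(high_digit, low_digit):
    """Pack two decimal digits (0-9) into a single 8-bit byte."""
    assert 0 <= high_digit <= 9 and 0 <= low_digit <= 9
    return (high_digit << 4) | low_digit

def unpack_bcd(byte):
    """Recover the two decimal digits from a packed BCD byte."""
    return (byte >> 4) & 0xF, byte & 0xF

# Two BCD digits fill a byte exactly, with no packing across bytes:
assert pack_bcd(4, 2) == 0x42
assert unpack_bcd(0x42) == (4, 2)

# A 7-bit ASCII code fits in one byte, wasting only a single bit:
assert ord('A') < 2**7 <= 2**8
```

With a narrower memory (say 6 bits) an ASCII character would straddle two words, and with a wider one (say 12 bits) storing one character per word would waste 5 bits; 8 bits is the width where neither cost is paid.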