Interesting. I was trying to understand the difference between an octet and a byte. Both essentially mean the same thing (8 bits), but "octet" is used to remove any ambiguity.

The byte is typically used to describe a storage unit whose size has historically varied between machines, while "octet" is common in telecommunications and networking contexts, where it always means exactly eight bits.

So the difference between octet and byte is mostly one of usage and context: both refer to eight bits in practice today, but "octet" is more specific and unambiguous than "byte".
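
A minimal sketch of the distinction in C: a C "byte" is CHAR_BIT bits (at least 8, but not guaranteed to be exactly 8), while uint8_t, where it exists, is an exact octet:

    /* a minimal sketch: a C byte is CHAR_BIT bits, an octet is exactly 8 */
    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 8 on mainstream platforms, but machines with other byte
         * sizes (e.g. 9 bits) have existed */
        printf("a byte here is %d bits\n", CHAR_BIT);

        /* uint8_t is only defined on platforms that have an exact
         * 8-bit type, i.e. where a byte really is an octet */
        uint8_t octet = 0xFF;
        printf("max octet value: %u\n", (unsigned)octet);
        return 0;
    }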


Discussion

on the PC platform it's tended to be (see the sketch after this list):

byte = 8 bits

word = 16 bits

double word = 32 bits

quad word = 64 bits
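
a quick sketch of those PC conventions as C11 _Static_asserts — they pass on typical x86/x86-64 compilers, but the C standard doesn't actually guarantee any of them:

    /* PC-convention widths, checked at compile time; these hold on
     * typical x86/x86-64 targets but are not promised by the standard */
    #include <limits.h>

    _Static_assert(CHAR_BIT == 8,          "byte        = 8 bits");
    _Static_assert(sizeof(short) == 2,     "word        = 16 bits");
    _Static_assert(sizeof(int) == 4,       "double word = 32 bits");
    _Static_assert(sizeof(long long) == 8, "quad word   = 64 bits");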

but on most other systems, a "word" is merely the native width of the data bus (as opposed to the address bus)

arguably, a "word" on the 8088 was 8 bits, since its external data bus was only 8 bits wide (even though it was a 16-bit CPU internally).

any chunk of data that would fit in the accumulator register of a CPU would be considered the "word" for a particular ISA.

but then "word" got locked in a bit, and the same with C types

try writing a C compiler where a "short int" isn't 16 bits wide, i.e. an i386/x86 word...

writing any kind of C code with large values or bitfields using types like "unsigned int" or "signed short int" instead of "uint32_t" or "int16_t" is risky...

like, most C compilers will do what you expect, so it's fine, but it's not particularly portable, since the C standard only guarantees minimum widths for those types (char at least 8 bits, short and int at least 16, long at least 32, long long at least 64), not exact sizes.
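
a minimal sketch of the risk (set_top_bit is just an illustrative name): shifting into bit 31 of an "unsigned int" is only safe if you know it's 32 bits wide, while the <stdint.h> types spell the width out:

    #include <stdint.h>

    void set_top_bit(void)
    {
        /* risky: "unsigned int" is only guaranteed to be at least 16
         * bits, so this shift is undefined behavior if int is 16 bits */
        unsigned int risky = 1u << 31;

        /* portable: uint32_t is exactly 32 bits wherever it's defined */
        uint32_t portable = UINT32_C(1) << 31;

        (void)risky; (void)portable;
    }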

Excellent background information here!