How signed integers are represented using 16 bits



This probably requires a little explanation of the nuances of how integers are stored in computers. A 16-bit signed integer is stored as 16 binary digits, where the first (leftmost) digit indicates the sign: 0 for non-negative, 1 for negative. Because this first digit is reserved for the sign, the counting system is a little different from ordinary counting in binary.

For starters,

0000000000000000 represents 0

0000000000000001 represents 1

0000000000000010 represents 2


...

0111111111111111 represents 2^{15}-1 = 32767

For the next number, 1000000000000000, there’s a catch. The first 1 means that this should represent a negative number. However, there’s no need for this to stand for -0, since we already have a representation for 0. So, to prevent representing the same number twice, we’ll say that this number represents 0 - 32768 = -32768, and we’ll follow this rule for all representations starting with 1: read the remaining 15 digits as an ordinary binary number n, and let the whole representation stand for n - 32768. So

1000000000000000 represents 0 - 32768 = -32768

1000000000000001 represents 1 - 32768 = -32767

1000000000000010 represents 2 - 32768 = -32766

...

1111111111111111 represents 32767 - 32768 = -1

Because of this nuance, the following C program prints the unexpected answer of -32768 (symbolized by the sheep going backwards in the comic strip). The sum has to be stored back into x, because C computes x + 1 at int width, and the wrap-around only appears once the result is forced back into 16 bits.



      short x = 32767;

      x = x + 1;

      printf("%d \n", x);

For more details, see
