Who defines the sign inversion pattern for integers?


Two's complement means that simply inverting all the bits of a number i gives me -i-1:

~0 is -1

~01000001 is 10111110

~65 is -66

etc. To switch the sign of an integer, I have to use the actual minus sign.

int i = 65;
int j = -i;
cout << j; // -65

Where is that actual behavior defined, and whose responsibility is it to ensure the two's-complement pattern (to make a number negative, invert all bits and add 1) is followed? I don't even know whether this is a hardware or a compiler operation.


There are 3 answers

Barmar On BEST ANSWER

It's usually done by the CPU hardware.

Some CPUs have an instruction for calculating the negative of a number. In the x86 architecture, it's the NEG instruction.

If not, it can be done with the multiplication operator, by multiplying the number by -1. But many programmers take advantage of the identity you discovered: complement the number, then add 1. See

How to convert a positive number to negative in assembly

Has QUIT--Anony-Mousse On

The reasons for this are simple: consistency with 0 and with addition.

You want addition to work the same for positive and negative numbers without special cases; in particular, incrementing -1 by 1 must yield 0.

The only bit sequence where the classic overflowing increment produces 0 is the all-ones sequence: increment it by 1 and you get all zeros. So that is your -1: all 1s, i.e. the bitwise negation of 0. Now we have (assuming 8-bit integers, incrementing by 1 each line)

-2: 11111110 = ~1
-1: 11111111 = ~0
 0: 00000000 = ~-1
+1: 00000001 = ~-2

If you don't want this behavior, you need to handle special cases in addition, and you end up with both +0 and -0. Most likely, such a CPU would be a lot slower.

If your question is how

int i = -j;

is implemented, that depends on your compiler, CPU, and optimization level. Usually it will be optimized together with the other operations you specify. But don't be surprised if it ends up being performed as

int i = 0 - j;

Since this probably takes 1-2 CPU ticks to compute (e.g. one XOR of a register with itself to get 0, then a SUB operation to compute 0-j), it will hardly ever be a bottleneck. Loading j and storing the result i somewhere in memory will be far more expensive. In fact, some CPUs (MIPS, for example) have a built-in register that is always zero; then you don't need a special negation instruction, you simply subtract j from $zero, usually in 1 tick. Current Intel CPUs are said to recognize such XOR operations and perform them in 0 ticks via register renaming (i.e. they point the next instruction at a fresh register that is zero). You have neg on amd64, but a fast xor rax,rax is useful in other situations, too.

M.M On

C arithmetic is defined in terms of values. When the code is:

int i = 65; 
int j = -i; 

the compiler will emit whatever CPU instructions are required to give j the value of -65, regardless of the bit representation.

Historically, not all systems used 2's complement. A C compiler would choose whichever representation of negative numbers led to the most efficient output for the target CPU, based on the CPU's capabilities.

However, 2's complement is a very common choice because it leads to the simplest algorithms for doing arithmetic. For example, the same + - * instructions can be used for both signed and unsigned integers.