SQL server integer vs decimal SUM


This is probably a newbie question, but... I was working on a database in SQSH that had large values in an integer column (a population attribute). When I tried to SUM them, I got an arithmetic overflow exception. When I cast the values to decimal instead, everything worked fine (no overflow). However, everything I read says that integer and decimal are essentially the same thing. So why is this happening?
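A minimal sketch of the behaviour described above (the table and column names here are hypothetical, not from the actual database):

```sql
-- Hypothetical table: each row holds a large population count.
CREATE TABLE Cities (
    Name       varchar(100),
    Population int            -- each value fits; max 2,147,483,647
);

-- Every individual value fits in an int, but the running total can
-- exceed int's range, so this raises an arithmetic overflow error.
SELECT SUM(Population) AS Total FROM Cities;

-- Casting widens the accumulator, so the total no longer overflows.
SELECT SUM(CAST(Population AS decimal(18, 0))) AS Total FROM Cities;
```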

Thanks in advance


There are 3 answers

Sparky (best answer)

An int is a 4-byte number with a maximum value of 2,147,483,647. A decimal can go substantially higher. Both are stored as integer data internally, but decimal also allows digits after the decimal point. You could also use bigint (max 9,223,372,036,854,775,807) in place of int.
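For the SUM itself, casting inside the aggregate is enough to switch the accumulator to bigint (a sketch; the Population column name is assumed, not from the question):

```sql
-- SUM over an int column accumulates as int and can overflow even
-- when every row fits; casting first makes the accumulator a bigint.
SELECT SUM(CAST(Population AS bigint)) AS Total
FROM Cities;
```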

AudioBubble

You can declare a decimal datatype with a precision that exceeds the range of the int datatype. For instance, decimal(18, 0) has a far larger capacity than int (which tops out at 2,147,483,647).

Granted, int takes up only 4 bytes, and you can't avoid the extra storage that something like decimal(18, 0) consumes. But it will get you around the max limit of int.
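A sketch of that approach, assuming a hypothetical Population int column:

```sql
-- decimal(18, 0): 18 digits of precision, no fractional digits.
-- Totals up to 999,999,999,999,999,999 fit, far beyond int's limit.
SELECT SUM(CONVERT(decimal(18, 0), Population)) AS Total
FROM Cities;
```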

Igor Borisenko

From Books Online:

decimal and numeric

decimal[ (p[ , s] )] and numeric[ (p[ , s] )] Fixed precision and scale numbers. When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1.

int, bigint, smallint, and tinyint

int has a storage size of 4 bytes and stores numbers from -2,147,483,648 through 2,147,483,647.

So, decimal can store a much greater value than int.