I am a little confused about when to use which primitive. If I am defining a number, how do I know whether to use `byte`, `short`, `int`, or `long`? I know they are different sizes in bytes, but does that mean I can only use one of them for a certain number?

So simply, my question is: when do I use each of the four primitives listed above?
It depends on your use case. There's no significant penalty to using an `int` as opposed to a `short` unless you have billions of numbers. Simply consider the range a variable might need. In most cases it is reasonable to use `int`, whose range is -2,147,483,648 to 2,147,483,647, while `long` handles numbers in the range of roughly +/- 9.22*10^18. If you aren't sure, `long` won't specifically hurt.

The only reason you might want `byte` specifically is if you are storing raw byte data, such as parts of a file, or are doing something like network communication or serialization where the exact number of bytes matters. Remember that Java's bytes are also signed (-128 to 127). The same goes for `short`: it might be useful to save 2 GB of memory for a billion-element array, but it isn't specifically useful for much beyond, again, serialization with a specific byte alignment.

No, you can use any type that is large enough to hold that number. Of course, decimal values need a `double` or a `float`; `double` is usually ideal due to its higher precision and few drawbacks, though some libraries (e.g. for 3D drawing) use floats. Remember that you can cast between numeric types, e.g. `(byte) someInt` or `(float) functionReturningADouble()`.
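To make the ranges and casting behavior concrete, here is a small sketch (class and variable names are illustrative). It prints the range of each integer primitive using the constants on the wrapper classes, then shows that a narrowing cast can silently overflow while a widening conversion is implicit and safe:

```java
// Sketch: ranges of Java's integer primitives and the effect of casts.
public class PrimitiveRanges {
    public static void main(String[] args) {
        // Each wrapper class exposes its primitive's range as constants.
        System.out.println("byte:  " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);

        // A narrowing cast discards high-order bits and can overflow:
        int someInt = 130;
        byte b = (byte) someInt;  // 130 doesn't fit in -128..127
        System.out.println(b);    // prints -126

        // Widening (int -> long) needs no cast and never loses data:
        long big = someInt;
        System.out.println(big);  // prints 130
    }
}
```

Note that the compiler forces you to write the narrowing cast explicitly, precisely because information can be lost; going the other way (e.g. assigning an `int` to a `long`) requires nothing special.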