I'm running into a problem where I have an implied unsigned hexadecimal number as a string, provided from user input, that needs to be converted into a BigInteger.
Thanks to the signed nature of BigInteger, any input where the highest-order bit of the leading digit is set (0x8 / 1000b) produces a number treated as negative. This can't be resolved by simply checking the sign bit and multiplying by -1 or taking the absolute value, because the two's complement interpretation does not respect the intended unsigned notation, e.g. "F", "FF", and "FFFF" are all treated as -1.
Here are some example inputs and outputs:
var style = NumberStyles.HexNumber | NumberStyles.AllowHexSpecifier;
BigInteger.Parse("6", style) == 6     // 0110 bin
BigInteger.Parse("8", style) == -8    // 1000 bin
BigInteger.Parse("9", style) == -7    // 1001 bin
BigInteger.Parse("A", style) == -6    // 1010 bin
...
BigInteger.Parse("F", style) == -1    // 1111 bin
...
BigInteger.Parse("FA", style) == -6   // 1111 1010 bin
BigInteger.Parse("FF", style) == -1   // 1111 1111 bin
...
BigInteger.Parse("FFFF", style) == -1 // 1111 1111 1111 1111 bin
What is the proper way to construct a BigInteger from an implied unsigned hexadecimal string?
Prefixing your hex string with a "0" should do it:
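For example, a minimal sketch (the "FFFF" input, the variable name, and the Console output are mine, chosen to mirror the question's last example):

using System;
using System.Globalization;
using System.Numerics;

// "FFFF" stands in for the user-supplied hex string from the question.
// Prepending "0" clears the highest-order bit of the leading digit, so the
// value is parsed as positive rather than as a two's complement negative.
BigInteger value = BigInteger.Parse("0" + "FFFF", NumberStyles.HexNumber);
Console.WriteLine(value); // 65535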
The resulting BigInteger is 65535 in the example above.
Edit
Excerpt from the BigInteger documentation: