I am wondering whether it is possible to store binary data in a number, and how the largest possible amount of binary data can be stored in a single number.
For example, let's say I want to store the following text in a number:
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec egestas nunc eget rhoncus blandit.
In binary form this is:
01001100 01101111 01110010 01100101 01101101 00100000 01101001 01110000 01110011 01110101 01101101 00100000 01100100 01101111 01101100 01101111 01110010 00100000 01110011 01101001 01110100 00100000 01100001 01101101 01100101 01110100 00101100 00100000 01100011 01101111 01101110 01110011 01100101 01100011 01110100 01100101 01110100 01110101 01110010 00100000 01100001 01100100 01101001 01110000 01101001 01110011 01100011 01101001 01101110 01100111 00100000 01100101 01101100 01101001 01110100 00101110 00100000 01000100 01101111 01101110 01100101 01100011 00100000 01100101 01100111 01100101 01110011 01110100 01100001 01110011 00100000 01101110 01110101 01101110 01100011 00100000 01100101 01100111 01100101 01110100 00100000 01110010 01101000 01101111 01101110 01100011 01110101 01110011 00100000 01100010 01101100 01100001 01101110 01100100 01101001 01110100 00101110
Now if I convert this to a number, I get: 2.15146353486 * 10^16
Converting this back to binary is where the problem lies: I get 00000010.
Now obviously I don't know what I'm doing here, so please understand this is not a "why this no work?" question. What I am asking is: is what I want to do possible, and if so, how can it be done?
Since binary can be converted to ASCII or Base64 and vice versa, it follows that converting to a number and back should work as well. After all, Base64 is basically a base-64 number system, while decimal is a base-10 system and binary is a base-2 system.
Any advice would be appreciated.
Everything, just EVERYTHING in a computer is binary — zeroes and ones.
Text, decimal numbers, Base64 strings: all of these are just human-understandable representations of some chunk of binary data.
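To make that concrete, here is a minimal sketch in Python (the language choice, variable names, and the sample bytes are mine, purely for illustration) that prints one and the same chunk of bytes as text, as bits, and as Base64:

    import base64

    # The same 5 bytes of binary data, shown in three human-readable forms.
    # The underlying bytes never change; only the representation does.
    data = b"Lorem"

    print(data.decode("ascii"))                    # as text:   Lorem
    print(" ".join(f"{b:08b}" for b in data))      # as bits:   01001100 01101111 ...
    print(base64.b64encode(data).decode("ascii"))  # as Base64: TG9yZW0=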
A number, in its usual sense denoting a decimal number, has nothing to do with text. It just makes no sense to try to represent a text as one huge decimal number.
Your conversion is wrong. The decimal digit 2, when expressed as a whole byte (= 8 bits), is encoded as 00000010. In other words, you converted back only the most significant digit of your number and omitted the rest.

Binary data is not stored in a (decimal) number. A number is just a representation of some chunk of binary data (in the case of integers, typically 1, 2, 4, or 8 bytes). So if you rephrased your question a little more correctly as “what is the largest decimal number that can be stored in binary data”, the answer would be: any number, however big; the only limit is the available memory.
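As an illustration of that last point, here is a sketch in Python (chosen only because its integers have arbitrary precision; the variable names and the encoding steps are my own, not something the question specified) that packs the whole example sentence into a single integer and recovers it again:

    text = ("Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
            "Donec egestas nunc eget rhoncus blandit.")

    data = text.encode("ascii")              # text  -> bytes (raw binary data)
    number = int.from_bytes(data, "big")     # bytes -> one huge integer
    print(number)                            # a decimal number over 200 digits long

    length = (number.bit_length() + 7) // 8  # bytes needed to hold the integer
    restored = number.to_bytes(length, "big").decode("ascii")
    print(restored == text)                  # True: nothing was lost

Note that this naive scheme would silently drop any leading zero bytes, and in a language with only fixed-size numeric types the same idea overflows or loses precision, so treat it as a sketch rather than a storage format.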