I have a 64-bit Linux system. I compile and run my Fortran code with gfortran, and it outputs a number to double precision, i.e. ~16 decimal places, e.g.:
gfortran some_code.f -o executable1
./executable1
10.1234567898765432
If I now compile with the flag -fdefault-real-8, the Fortran double precision type is promoted to 16 bytes = 128 bits, and the number is output to a higher precision, ~33 decimal places, e.g.:
gfortran -fdefault-real-8 some_code.f -o executable2
./executable2
10.12345678987654321234567898765432
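For illustration, a minimal fixed-form program of the kind I am compiling (a hypothetical sketch, not my actual some_code.f) would be:

! hypothetical sketch, not my actual some_code.f
! double precision is 8 bytes by default; -fdefault-real-8
! promotes it to 16 bytes, so the same print shows ~33 digits
      program sketch
      implicit none
      double precision x
! pi, computed in whatever "double precision" currently is
      x = 4.0d0*atan(1.0d0)
      print *, x
      end program sketch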
My question is: how can this computation have been done to such high precision if my computer is only 64-bit?
First, the fact that your CPU is 64-bit means that it uses 64-bit pointers (memory addresses); it has nothing at all to do with floating point variable sizes. 32-bit CPUs (and even 16-bit ones!) handled 64-bit floating point numbers and integers just fine.¹
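You can check this yourself. The following sketch (hypothetical, assuming a gfortran new enough for the Fortran 2008 storage_size intrinsic) prints the bit widths of the real types, and it prints the same numbers whether you build it with -m64 or, if you have the 32-bit multilib installed, with -m32:

! hypothetical sketch: floating point widths do not depend on the
! pointer width; building with -m64 or (with a 32-bit multilib
! installed) -m32 prints the same numbers
      program sizes
      implicit none
      real r
      double precision d
      print *, 'default real bits:     ', storage_size(r)
      print *, 'double precision bits: ', storage_size(d)
      end program sizes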
128-bit floating point arithmetic is implemented in software; it is a sort of "emulation" of a 128-bit floating point unit, and it is actually very slow. This is not because your CPU is 64-bit, but because the floating point unit of the CPU only implements 64-bit (and, in x87, 80-bit extended) arithmetic in hardware, not 128-bit. The same was true of Intel's 32-bit CPUs.
The library that implements these 128-bit computations for GCC is libquadmath.
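If you only need 128-bit reals in a few places, you do not have to promote everything with -fdefault-real-8; you can request a kind with at least 33 decimal digits explicitly. A minimal sketch (assuming an x86 target, where gfortran provides this kind through libquadmath and links the library in automatically):

! hypothetical sketch: ask for at least 33 decimal digits instead
! of promoting everything with -fdefault-real-8; on x86 gfortran
! this gives the 128-bit kind backed by libquadmath
      program quad
      implicit none
      integer, parameter :: qp = selected_real_kind(33)
      real(kind=qp) x
      x = 4.0_qp*atan(1.0_qp)
      print *, precision(x), ' decimal digits'
      print *, x
      end program quad

No extra flags are needed to build it; gfortran pulls in libquadmath itself, and precision(x) reports 33.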
¹ Actually, floating point operations are done in the floating point unit (FPU). It used to be a separate chip from the CPU, but nowadays it is always integrated in Intel consumer processors. In the old days, if you did not buy the separate FPU, all floating point arithmetic was emulated in software and slow.