Numpy is a library for efficient numerical arrays.
mpmath, when backed by gmpy, is a library for efficient multiprecision numbers.
How do I put them together efficiently? Or is it already efficient to just use a Numpy array with mpmath numbers?
It doesn't make sense to ask for "as efficient as native floats", but you can ask for it to be close to the efficiency of equivalent C code (or, failing that, Java/C# code). In particular, an efficient array of multi-precision numbers would mean that you can do vectorized operations and not have to look up, say, `__add__` a million times in the interpreter.
Edit: To the close voter: My question is about an efficient way of putting them together. The answer in the possible duplicate specifically points out that the naive approach is not efficient.
Having a numpy array of `dtype=object` can be a little misleading: the powerful numpy machinery that makes operations on the standard dtypes so fast now defers to each object's own Python operators, which means the speed is no longer there.
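For illustration, a minimal sketch (assuming mpmath is installed) of what that dispatch difference looks like:

```python
import numpy as np
import mpmath

mpmath.mp.prec = 250  # working precision in bits

# float64 array: the multiplication runs in numpy's vectorized C loops.
x = np.linspace(0.0, 1.0, 1000)
y = x * x

# dtype=object array of mpmath.mpf values: numpy still broadcasts the
# expression, but each element-wise product calls mpf.__mul__ in the
# Python interpreter, so the C-level speedup is lost.
xm = np.array([mpmath.mpf(float(v)) for v in x], dtype=object)
ym = xm * xm
```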
Disclaimer: I maintain gmpy2. The following tests were performed with the development version.

`a` and `b` are 1000-element lists containing pseudo-random `gmpy2.mpfr` values with 250 bits of precision. The test performs element-wise multiplication of the two lists.

The first test uses a list comprehension:
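A sketch of the setup and the list-comprehension test (the exact timing numbers from the original runs are not reproduced here):

```python
import random
import timeit
import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 250  # 250-bit working precision

# Two 1000-element lists of pseudo-random mpfr values.
a = [mpfr(random.random()) for _ in range(1000)]
b = [mpfr(random.random()) for _ in range(1000)]

# First test: element-wise multiplication via a list comprehension.
def test_listcomp():
    return [x * y for x, y in zip(a, b)]

print(timeit.timeit(test_listcomp, number=1000))
```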
The second test uses the `map` function to perform the looping. The third test is a C implementation of the list comprehension, exposed to Python as `vector2`. Sketches of how both are invoked are shown below.
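Roughly how the second and third tests are invoked, reusing `a` and `b` from the setup above (the C source of `vector2` is not reproduced here, and the module name in the import is hypothetical):

```python
import timeit
import gmpy2

# Second test: let map() drive the loop; gmpy2.mul performs the
# element-wise multiplication under the active context.
def test_map():
    return list(map(gmpy2.mul, a, b))

print(timeit.timeit(test_map, number=1000))

# Third test: the same operation, but the loop runs inside a compiled
# C extension function. Hypothetical import shown for illustration only.
# from gmpy2_vector import vector2
# print(timeit.timeit(lambda: vector2(a, b), number=1000))
```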
In the third attempt, `vector2` tries to be a well-behaved Python function. Numeric types are handled using gmpy2's type conversion rules, error checking is done, the context settings are checked, subnormal numbers are created if requested, exceptions are raised if needed, and so on. If you ignore all the Python enhancements and assume all the values are already `gmpy2.mpfr`, I was able to get the time down further on a fourth attempt. The fourth version doesn't do enough error checking to be of general use, but a version between the third and fourth attempts may be possible.
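As a rough pure-Python rendering of the trade-off being described (only an illustration of the semantics; the actual third and fourth attempts are C implementations):

```python
import gmpy2
from gmpy2 import mpfr

def vector2_careful(a, b):
    # "Well-behaved" third attempt: accept any numeric input, apply
    # gmpy2's conversion rules, and let the active context handle
    # precision, rounding, subnormals and exception traps.
    return [gmpy2.mul(mpfr(x), mpfr(y)) for x, y in zip(a, b)]

def vector2_trusting(a, b):
    # Stripped-down fourth attempt: assume every element is already an
    # mpfr and skip the conversion and most of the error checking.
    return [x * y for x, y in zip(a, b)]
```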
It is possible to decrease Python overhead, but as the precision increases, the effective savings decreases.