Hello SO. I just wrote my first semi-significant PC program, purely for fun and to solve a problem, rather than as an assignment in a programming class. I'm sure many of you remember the first significant program you wrote for fun.
My issue is that I'm not satisfied with the efficiency of my code. I'm not sure whether the bottleneck is the I/O of my terminal or the code itself, but it runs quite slowly for DAC resolutions of 8 bits or higher.
I haven't commented the code, so here is an explanation of the problem that I was attempting to solve with this program:
The output voltage of a DAC is determined by a binary number having bits Bn, Bn-1 ... B0, and a full-scale voltage.
The output voltage has an equation of the form:
Vo = G * ( Bn/2^0 + Bn-1/2^1 + ... + B0/2^n )
Where G is the gain that would make an input B of all bits high the full-scale voltage.
If you run the code the idea will be quite clear.
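For concreteness, here is a minimal sketch of that series calculation (the function names are my own, not taken from my actual program), with G derived from the all-bits-high input as described:

```cpp
#include <cmath>
#include <vector>

// Sum of Bn/2^0 + Bn-1/2^1 + ... + B0/2^n, bits given MSB first.
double series_sum(const std::vector<int>& bits) {
    double s = 0.0;
    for (std::size_t i = 0; i < bits.size(); ++i)
        s += bits[i] * std::pow(2.0, -static_cast<double>(i));
    return s;
}

// G is chosen so that an input of all bits high produces the
// full-scale voltage vfs.
double dac_output(const std::vector<int>& bits, double vfs) {
    std::vector<int> ones(bits.size(), 1);
    double g = vfs / series_sum(ones);
    return g * series_sum(bits);
}
```

With this definition, `dac_output` on an all-ones input returns the full-scale voltage exactly (up to rounding).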
My issue is that I think what I'm outputting to the console can be achieved in far fewer than 108 lines of C++. Yes, it could easily be done by precomputing the step voltage and simply rendering the table by incrementing, but a "self-requirement" I have for this program is that, on some level, it performs the series calculation described above for each binary-represented input.
I'm not trying to be crazy with that requirement. I'd like this program to prove the nature of the formula, which it currently does. What I'm looking for is some suggestions on how to make my implementation generally cleaner and more efficient.
You could use Horner's method to evaluate the formula efficiently. Here's an example I use to demonstrate it for converting binary strings to decimal:
0.1101 = 1*2^-1 + 1*2^-2 + 0*2^-3 + 1*2^-4.
To evaluate this expression efficiently, rewrite the terms from right to left, then nest and evaluate as follows:
(((1 * 2^-1 + 0) * 2^-1 + 1) * 2^-1 + 1) * 2^-1 = 0.8125.
This way, you've removed the "pow" function, and you only have to do one multiplication (division) for each bit.
By the way, I see your code allows up to 128 bits of precision. You won't be able to compute that accurately in a double.
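That limit follows from the 53-bit significand of an IEEE 754 double (the usual representation on current platforms): once a bit's weight drops below the ulp of the accumulated sum, its contribution is rounded away entirely. A quick illustration, using a helper of my own:

```cpp
#include <cmath>

// Returns true iff adding 2^-e to 1.0 changes the stored double.
// With a 53-bit significand, ulp(1.0) = 2^-52, so contributions
// much smaller than that simply vanish.
bool bit_survives(int e) {
    double x = 1.0 + std::pow(2.0, -e);
    return x != 1.0;
}
```

So around bit 53 and beyond, the low-order terms of the series contribute nothing; an arbitrary-precision type would be needed to keep all 128 bits.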