
A simple and fast look-up table method to compute the exp(x) and ln(x) functions

Claude Baumann, Director of the Boarding School "Convict Episcopal de Luxembourg"[i], 5 avenue Marie-Thérèse, m.b. 913, L-2019 Luxembourg[ii], claude.baumann@education.lu

July 29th 2004

Abstract

Normally, today's computers should have no problem rapidly calculating trigonometric or transcendental functions by any means; developing better algorithms appears to be more of a sport than a practical necessity. However, the growing number of embedded applications based on micro-controllers may demand gains in speed and memory economy, since those applications are fundamentally characterized by limited resources and by designs optimized to reduce costs. Thus, fast and memory-economic algorithms keep playing an important role in this domain.

The present algorithm was developed during the elaboration of Ultimate ROBOLAB, an extension to the ROBOLAB software[iii], known as a powerful graphical programming environment for the LEGO RCX. Unlike standard ROBOLAB, which works with interpreted code, Ultimate ROBOLAB[iv] directly generates assembly code from the graphical code and compiles it to Hitachi H8 byte code, which is then downloaded to the RCX.

The algorithm has three important sections: the first, universally known, consists in reducing the function f(x) = exp(x) to f0(x) = 2^x (respectively g(x) = ln(x) to g0(x) = log2(x)); the second concerns arranging the numbers in such a way that the CPU only has to compute f0(x) or g0(x) for x ∈ [1,2[; and the third manages fast computation of f0(x) and g0(x) to any desired precision from a look-up table composed of only 22 numbers for IEEE 754 standard single precision floating-point numbers. During the execution of the elementary functions, the CPU has to perform at most 22 products. The average case requires around 11 multiplications.

The development of Ultimate ROBOLAB, a LabVIEW-based graphical compiler for Hitachi H8/300 code, is a particularly interesting demonstration of the typical constraints to which embedded design with micro-controllers may be subjected. Ultimate ROBOLAB is intended as a development tool for advanced users of the well-known LEGO RCX. This module was invented as the central part of the LEGO Robotics Invention System (RIS), a highly sophisticated toy that, due to its excellent features, has found many applications as an educational tool in schools, high schools and even universities.

The heart of the RCX is a Hitachi H8/3292 micro-controller clocked at 16 MHz. Besides peripheral devices, 16 kB of on-chip ROM and 512 bytes of RAM, this micro-controller encapsulates an H8/300 CPU that has eight 16-bit registers r0..r7, addressable as sixteen 8-bit registers r0H, r0L, ..., r7H, r7L (r7 is used as the stack pointer); a 16-bit program counter; an 8-bit condition code register; register-register arithmetic and logic operations, among which 8- or 16-bit add/subtract (125 ns at 16 MHz), 8×8-bit multiplication (875 ns) and 16÷8-bit division (875 ns); a concise instruction set (instruction lengths of 2 or 4 bytes); and 9 different addressing modes. Together with 32 kB of external RAM, an LCD display, buttons, an infrared communication module, analog sensor ports and H-bridge output ports, the RCX is an ideal instrument for the exploration of micro-controllers in educational contexts.


In order to give the RCX the greatest flexibility for all kinds of robot projects, the programming environment Ultimate ROBOLAB required a firmware kernel that guarantees memory economy and execution speed. Thus each part of the kernel had to be optimized to the limits, so that users could dispose of a maximum of memory for their own code and of sufficient reaction speed for reliable robot behaviors and closed-loop controls. This was particularly challenging for the inclusion of an advanced mathematical kernel comprising IEEE 754 standard single precision floating-point operations, among which the square root, trigonometric and exponential functions. Scrutinizing the subject with the goal of finding algorithms that could best balance the requirements rapidly ended in the choice of Heron's algorithm for the square root and CORDIC[v] for the trigonometric functions. The exponential functions, however, proved more difficult.

1. Different methods of calculating y=exp(x) and y=ln(x)

In order to dispose of suitable algorithms for the exponential function and its reciprocal, several existing methods were evaluated. An alternative CORDIC[vi] algorithm using a hyperbolic atanh table instead of the atan value list was the first possibility that we tested, applying the equations:

$$\ln(x) = 2\,\operatorname{atanh}\!\left(\frac{x-1}{x+1}\right)$$

$$\exp(x) = \cosh(x) + \sinh(x)$$

But the implementation of this solution led to excessively long and complex code, because an additional algorithm was necessary to assure convergence. Another alternative was an application of Taylor series:

$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots,\qquad -1 < x \le 1$$

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots + \frac{x^n}{n!} + \cdots$$

While being easily programmable, these series have the disadvantage of converging only very slowly, leading either to unacceptably long computing times or to unsatisfactory accuracy.

Other high-speed alternatives known from FPGA or DSP implementations and based on large constant tables had to be excluded as well. A remarkable representative of efficient table-based algorithms was presented by DEFOUR et al.[vii], who succeeded in obtaining high-precision function approximation with a multipartite method by performing 2 multiplications and adding 6 terms. The trade-off, and the reason for the disqualification in our application, is that the table size grows exponentially with the precision. For example, in order to get 23 correct bits, corresponding to an error of ½ ulp (unit in the last place) in the case of IEEE 754 standard single precision floating-point data, the exp(x) function needs a table of 82432 constants. Obviously such a table would explode the RCX memory capacity.

A better approximation choice was the use of a polynomial based on error-minimizing Chebycheff[viii] nodes[ix]. For the purpose of y = 2^x, x ∈ [1,2[, the function applied to determine the 10 Chebycheff points is:


$$x_i = \frac{3}{2} + \frac{1}{2}\cos\!\left(\frac{(i-0.5)\,\pi}{10}\right),\qquad i = 1,\ldots,10$$
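As an illustration, the following minimal C sketch evaluates this node formula; the loop and variable names are ours, not the paper's:

```c
#include <stdio.h>
#include <math.h>

/* Compute the 10 Chebycheff nodes on [1,2[ used as fitting
   points for the 2^x polynomial approximation. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    double node[10];

    for (int i = 1; i <= 10; i++) {
        node[i - 1] = 1.5 + 0.5 * cos((i - 0.5) * PI / 10.0);
        printf("x_%d = %.9f\n", i, node[i - 1]);
    }
    return 0;
}
```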

These nodes serve as reference points for a 9th-order polynomial curve fit. When applying the Horner scheme[x] to the polynomial, the number of operations totals 9 multiplications and 9 additions per function call, compared to 7·2 + 1 = 15 multiplications and 9 additions for the straightforward polynomial calculation.

$$P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n = a_0 + x(a_1 + x(a_2 + x(a_3 + \cdots + x(a_{n-1} + a_n x)))) \qquad \text{(Horner scheme)}$$
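A minimal C sketch of the Horner evaluation follows; the function name and the placeholder coefficient array are ours, and the actual values would be the coefficients produced by the Chebycheff fit:

```c
/* 9th-order polynomial approximation of 2^x on [1,2[, evaluated
   with the Horner scheme: 9 multiplications and 9 additions. */
static const double a[10] = {
    /* a0 ... a9: paste the coefficients of the Chebycheff fit here */
    0.0
};

double poly_exp2(double x)
{
    double y = a[9];
    for (int k = 8; k >= 0; k--)
        y = y * x + a[k];   /* y = a[k] + x * y */
    return y;
}
```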

The accuracy that can be obtained with the algorithm is about 3E-7, as can be seen in the result of a LabVIEW simulation (picture 2).

Picture 1: Operating the polynomial fit with LabVIEW 7.1 on the basis of 10 Chebycheff nodes.


Picture 2: Absolute error of the previous polynomial approximation. (mse=0.33)

A similarly efficient polynomial approximation could be set up for the log2(x) function in the special case x ∈ [1,2[. Of course, suitable algorithms must be added to extend the functions to the ranges ]-∞,+∞[ and ]0,+∞[ respectively.

Further investigation of the subject, however, led us to the algorithm described in this paper as a better solution for the purpose, combining memory economy and execution speed in the case of the IEEE 754 standard single precision floating-point representation[xi].

2. Calculation of f(x)=exp(x)

2.1. Simplification to f0(x) = 2^x

For any real x we have:

$$e^x = 2^{\log_2(e^x)} = 2^{\frac{x}{\ln(2)}}$$

This equation signifies that any x must be multiplied by the constant c = 1/ln(2) before computing the f0 function.

Convention: we may suppose x > 0, because the trivial identities e^0 = 1 and e^(-x) = 1/e^x allow an easy computation for any x.
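The reduction and the sign convention together give a very small wrapper. In the C sketch below, exp2_lut() is a hypothetical stand-in for the table-based 2^x routine developed in the following sections:

```c
float exp2_lut(float x);   /* the table-based 2^x routine, see below */

/* exp(x) = 2^(x / ln 2); negative arguments use e^-x = 1/e^x. */
float exp_lut(float x)
{
    const float C = 1.4426950408889634f;   /* c = 1/ln(2) */

    if (x == 0.0f) return 1.0f;            /* trivial case e^0 = 1 */
    if (x < 0.0f)  return 1.0f / exp_lut(-x);
    return exp2_lut(C * x);
}
```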

2.2. Arranging of f0(x) = 2^x

The IEEE standard floating-point representation carries three pieces of information about the number concerned: the sign, the biased exponent and the fractional part (mantissa).

Example: single precision representation


bit:    00 | 01 02 03 04 05 06 07 08 | 09 10 11 ... 31
field:   s | e  e  e  e  e  e  e  e  | m  m  m  ... m

(bit 00 = s, the sign; bits 01-08 = e, the biased exponent; bits 09-31 = m, the 23-bit mantissa)

where:

if e = 255 and m <> 0, then x = NaN ("Not a Number")

if e = 255 and m = 0, then x = (-1)^s * ∞

if 0 < e < 255, then x = (-1)^s * 2^(e-127) * (1.m), with (1.m) representing the binary number created by preceding m with a leading 1 and the binary point

if e = 0 and m <> 0, then x = (-1)^s * 2^(-126) * (0.m), representing denormalized values

if e = 0 and m = 0 and s = 1, then x = -0

if e = 0 and m = 0 and s = 0, then x = 0

The particular representation of the number 2.25 therefore is:

s = 0

e = 1 + 127 = 128 = b'10000000'

m = 0.125 = b'0010000 00000000 00000000'

Normalized mantissas always describe numbers belonging to the interval [1,2[ .

Convention: unbiased exponent = q (integer)

x = (-1)^s * 2^q * (1.m)

Note: it is not possible to represent the number 0 in this form!

For x > 0:

$$2^x = 2^{\,2^q \cdot (1.m)} = \left(2^{1.m}\right)^{2^q}$$
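At the bit level, extracting s, q and 1.m is straightforward. The following C sketch (names ours) decodes a normalized single precision value and reproduces the 2.25 example:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Decode sign, unbiased exponent q and mantissa field m of a
   normalized IEEE 754 single precision number. */
int main(void)
{
    float x = 2.25f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* reinterpret the bits */

    uint32_t s = bits >> 31;                         /* sign bit */
    int      q = (int)((bits >> 23) & 0xFFu) - 127;  /* e - 127 */
    uint32_t m = bits & 0x7FFFFFu;                   /* 23 mantissa bits */

    /* For 2.25: s = 0, q = 1, m = 0x100000 = b'0010000 0...0' */
    printf("s=%u q=%d m=0x%06X\n", s, q, m);
    return 0;
}
```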

2.3. Study of f1(x = 1.m) = 2^(1.m)

$$x = 1\cdot 2^0 + a_1 2^{-1} + a_2 2^{-2} + a_3 2^{-3} + \cdots,\qquad a_i \in \{0,1\}\ \text{(bit representation of } m\text{)}$$

$$= 1 + a_1\cdot 0.5 + a_2\cdot 0.25 + a_3\cdot 0.125 + \cdots$$

$$2^x = 2^{\,1 + a_1 0.5 + a_2 0.25 + a_3 0.125 + \cdots} = 2^1\cdot 2^{a_1 0.5}\cdot 2^{a_2 0.25}\cdot 2^{a_3 0.125}\cdots$$

$$= 2\cdot\left(\sqrt{2}\right)^{a_1}\cdot\left(\sqrt[4]{2}\right)^{a_2}\cdot\left(\sqrt[8]{2}\right)^{a_3}\cdots = 2\,\prod_{i=1}^{n}\left(2^{2^{-i}}\right)^{a_i}\qquad\text{(formula 1)}$$

In any case we have 2^(x-1) ∈ [1,2[, since

$$\lim_{n\to\infty}\,\prod_{i=1}^{n} 2^{2^{-i}} = 2$$

or, simpler: m ∈ [0,1[ implies 2^m ∈ [1,2[. Thus the result 2^(x-1) will always be normalized to the IEEE standard.

To compute 2^(x-1) quickly, we only need to first set up a look-up table with the constant values of the successive square roots of 2, then perform consecutive multiplications of a selection of those numbers while scanning the mantissa m of x. The table is filled with values previously yielded by any iterative means with a precision < 0.5 ulp. (For instance, in a practical micro-controller application, the values can be produced by the cross-compiler host and stored as constants in the micro-controller's ROM, EEPROM or even as part of the program code.) Due to the limited accuracy of the single precision representation, the LUT only needs 22 different values; the last bit won't add any information, as can be seen in the following example. (For better comprehension, we added the value 2 to the table at index 0.)
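On the host side, one simple way of producing the table is repeated square-rooting in double precision before rounding to single, as in this C sketch; the paper only requires that the stored constants be accurate to < 0.5 ulp, and the names are ours:

```c
#include <math.h>

#define LUT_SIZE 23          /* index 0 holds the extra value 2 */

float lut[LUT_SIZE];

/* lut[i] = 2^(2^-i): successive square roots of 2, lut[0] = 2. */
void build_lut(void)
{
    double v = 2.0;
    for (int i = 0; i < LUT_SIZE; i++) {
        lut[i] = (float)v;
        v = sqrt(v);         /* next successive square root */
    }
}
```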

Example:

Compute y = 2^1.171875

Look-up table (LUT):

index   2^(2^-i)
0       2
1       1.41421353816986084
2       1.18920707702636719
3       1.09050774574279785
4       1.0442737340927124
5       1.02189719676971436
6       1.010889176330566
7       1.00542986392974853
8       1.00271129608154297
9       1.00135469436645508
10      1.00067710876464844
11      1.00033855438232422
12      1.00016927719116211
13      1.00008463859558105
14      1.00004231929779053
15      1.00002110004425049
16      1.00001060962677002
17      1.00000524520874023
18      1.00000262260437012
19      1.00000131130218506
20      1.0000007152557373
21      1.00000035762786865
22      1.00000011920928955
23      1.00000011920928955

(b’0000000 00000000 00000001’ is the smallest mantissa that can be represented in IEEE 754 single precision mode. The corresponding normalized number equals 1.0000001192… in decimal notation.)

IEEE mantissa (1.171875) = b’0010110 00000000 00000000’

2^1.171875 = LUT(0) · LUT(3) · LUT(5) · LUT(6)

= 2 · 1.0905077 · 1.0218971 · 1.0108891

= 2.2530429 (only considering the significant base-10 digits here)

In practice, in order to keep the result normalized during the whole computation, it is advantageous to perform the multiplication by 2 only at the end of the calculation, simply by incrementing the exponent.
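Putting the pieces together, a minimal C sketch of the core loop might look as follows; it scans the mantissa bits, multiplies the selected entries of the lut[] table built above, and doubles at the end through the exponent field (the function name is ours):

```c
#include <stdint.h>
#include <string.h>

extern float lut[23];        /* table from the sketch above */

/* Compute 2^x for normalized x in [1,2[ by scanning the mantissa
   bits a_1..a_22 (the 23rd bit is ignored, see above). */
float exp2_mantissa(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);

    float y = 1.0f;
    for (int i = 1; i <= 22; i++)
        if (bits & (1u << (23 - i)))   /* mantissa bit a_i set? */
            y *= lut[i];               /* y *= 2^(2^-i) */

    /* final multiplication by 2: increment the biased exponent */
    memcpy(&bits, &y, sizeof bits);
    bits += 1u << 23;
    memcpy(&y, &bits, sizeof bits);
    return y;
}
```

With the table above, exp2_mantissa(1.171875f) multiplies lut[3], lut[5] and lut[6] and returns about 2.2530429, matching the worked example.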

2.3.1. Accuracy and speed

As a consequence of the theorem[xii] postulating that the error η of n successive floating-point multiplications may be expressed as

$$\eta \le (2n-1)\,\mathrm{ulp}$$

under the condition that

$$x_{\min} \le x_i \le x_{\max} \qquad\text{and}\qquad x_{\min}\,c_1 \;\le\; x_1\, x_2 \cdots x_n \;\le\; x_{\max}\,c_2,$$

where

$$c_1 = 1 - (2n-2)\,\mathrm{ulp} \qquad\text{and}\qquad c_2 = \frac{1 - (2n-2)\,\mathrm{ulp}}{1 - (4n-4)\,\mathrm{ulp}},$$

we can conclude that η won't grow out of an acceptable range. In fact, the maximum relative error can theoretically be estimated as 5.6E-6 for n = 22 and as 2.7E-6 for n = 12. Nonetheless, the LabVIEW simulation gives a better result of about 8E-7 in the worst case. The simulation also reveals an error growing as a function of x. (Note that the simulation compared the square-root algorithm with the current LabVIEW (v7.1) y = 2^x function applied to extended precision numbers.)

Picture 3: A LabVIEW simulation reveals that the maximum absolute error may be considered as 8E-7. (mse=2.33)

Compared to the Chebycheff polynomial approximation, there is a non-negligible loss of accuracy. Another trade-off is the fact that the number of multiplications depends on the number of significant bits, whereas the polynomial algorithm keeps the number of operations constant. This certainly is an issue if higher precision representations are chosen.


The computational worst case is a product with m = b'1111111 11111111 11111111', which requires 22 multiplications (remember that the last bit may be ignored).

Since the bit values 0 and 1 have the same probability of appearing in m, the average case requires only 11 products.

If we accept the empirical assertion that a floating-point multiplication takes about 5 times longer than an addition, the algorithm would execute within the time of 11·5 = 55 additions, while the polynomial approximation would need 9·5 + 9 = 54 additions. Thus the average computing speed may be considered essentially equal for both algorithms.

However, a considerable speed gain can be obtained if the algorithm is programmed at the lowest level, where access to the floating-point representation itself is possible. (This is practically required anyway, since the look-up table items are selected on the mantissa bits!) One disadvantage of the polynomial algorithm is the fact that the multiplications produce unnormalized results that need to be re-normalized and also aligned for the additions. With the square-root algorithm, such normalizing procedures are superfluous, since it is guaranteed that after each multiplication the result is normalized. Thus, calling a primitive floating-point multiplication function without the normalizing procedure can substantially accelerate the execution.

Another possibility to reduce the computing time is to consider fewer significant mantissa bits. Indeed, it might be useful in some cases to gain speed at the expense of accuracy. In this case, the gain compared to the Chebycheff polynomial approach may be remarkable.

2.4. The influence of the exponent q ≠ 0

Normally, mathematicians would treat the exp(x) and especially the 2^x functions differently than presented in this document when extending the domain of definition from [1,2[ to ]-∞,+∞[, or to the portion of this interval that corresponds to the single precision representation.

In fact they'd prefer writing:

$$\forall x \in \mathbb{R},\ r \in \mathbb{Z}:\quad 2^x = 2^{\,r + 1.m} = 2^{r}\cdot 2^{1.m},\qquad\text{where } r = \lfloor x\rfloor - 1$$

At first sight, the operation looks very easy and fast, since only the f0 function must be executed and its result multiplied by the r-th power of two; the latter operation can be done simply and rapidly by increasing or decreasing the binary exponent by |r|. However, there is a catch! The issue is that the floor function, which truncates the floating-point value, needs quite an impressive number of execution steps, because it cannot be deduced by simple means from the IEEE 754 floating-point representation. Therefore we propose a rather unorthodox approach that results in very compact program code.

2.4.1. First case: q > 0


$$2^x = 2^{\,2^q (1.m)} = \left(2^{1.m}\right)^{2^q} = \left(2^{1.m}\right)^{2\cdot 2\cdot 2\cdots} = \left(\left(\left(2^{1.m}\right)^2\right)^2\right)^{\cdots}$$

This equation means nothing else than computing successive squares of the 2^(1.m) value, each squaring costing a single multiplication.

2^x rapidly reaches the limit of the IEEE floating-point representation, as can be seen in the following series:

q:          1    2     3      4        5             6              7
2^(2^q):    4    16    256    65536    4294967296    1.8446744E19   3.4028236E38

For example, the largest number that can be represented in single precision is 2^127 · 1.999999 ≈ 3.4E38. Thus the largest x for which 2^x can be computed satisfies x < log2(2^128) = 128, which corresponds to q ≤ 7.

Example:

Compute y = 2^9.375

The IEEE single precision representation of this number gives:

q = 3, 1.m = 1.171875

$$y = \left(\left(\left(2^{1.171875}\right)^2\right)^2\right)^2 = 2.25304298^{\,8} = 663.981852$$

Note that the relative error grows in a benign way with each multiplication, even if the second condition of the theorem cited in 2.3.1 is no longer fulfilled.
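In C, the q > 0 case reduces to a short squaring loop on top of the exp2_mantissa() sketch from section 2.3 (function names ours):

```c
float exp2_mantissa(float x);   /* sketch from section 2.3 */

/* 2^(2^q * 1.m) for q > 0: square the result of 2^(1.m) q times. */
float exp2_pos(float m1, int q)
{
    float y = exp2_mantissa(m1);
    while (q-- > 0)
        y *= y;                 /* one multiplication per squaring */
    return y;
}
```

exp2_pos(1.171875f, 3) performs three squarings and yields about 663.98, as in the example.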


2.4.2. Second case: q < 0

$$2^x = 2^{\,2^q(1.m)} = \left(2^{1.m}\right)^{2^q} = \sqrt{\sqrt{\cdots\sqrt{2^{1.m}}}} = \sqrt[2^{(-q)}]{2^{1.m}}$$

Replacing 2(1.m) according to formula 1, we get:

$$2^x = \sqrt[2^{(-q)}]{\,2\cdot\left(\sqrt{2}\right)^{a_1}\cdot\left(\sqrt[4]{2}\right)^{a_2}\cdot\left(\sqrt[8]{2}\right)^{a_3}\cdots}$$

If q = -1, the equation may be rewritten:

$$2^x = \sqrt{2}\cdot\left(\sqrt[4]{2}\right)^{a_1}\cdot\left(\sqrt[8]{2}\right)^{a_2}\cdot\left(\sqrt[16]{2}\right)^{a_3}\cdots$$

If q = -2, it may be written as:

$$2^x = \sqrt[4]{2}\cdot\left(\sqrt[8]{2}\right)^{a_1}\cdot\left(\sqrt[16]{2}\right)^{a_2}\cdot\left(\sqrt[32]{2}\right)^{a_3}\cdots$$

and so on.

To compute the product for any q < 0, we can apply a simple offset shift in the look-up table, depending only on the value of q.

Example:

Compute y = 2^0.146484375

The IEEE single precision representation of this number gives:

q = -3 and 1.m = 1.171875

y = LUT(0+3) · LUT(3+3) · LUT(5+3) · LUT(6+3)

= 1.0905077 · 1.0108891 · 1.0027112 · 1.0013546

= 1.1068685

Note that for q < -22, 2^x will be considered equal to 1.
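A C sketch of the offset-shift variant, again building on the lut[] table and with names of our own choosing:

```c
#include <stdint.h>
#include <string.h>

extern float lut[23];           /* table from section 2.3 */

/* 2^(2^q * 1.m) for q < 0: same mantissa scan as before, but every
   table index is shifted by |q|; indices beyond the table contribute
   nothing, and for q < -22 the result is simply 1. */
float exp2_neg(float m1, int q)
{
    int shift = -q;
    if (shift > 22) return 1.0f;

    uint32_t bits;
    memcpy(&bits, &m1, sizeof bits);

    float y = lut[shift];                  /* shifted leading factor 2 */
    for (int i = 1; i + shift <= 22; i++)
        if (bits & (1u << (23 - i)))       /* mantissa bit a_i set? */
            y *= lut[i + shift];
    return y;
}
```

exp2_neg(1.171875f, -3) multiplies lut[3], lut[6], lut[8] and lut[9] and returns about 1.1068685, reproducing the example.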

2.5. Practical implementation of the algorithm y = 2^x

Preliminary notes: for this sample, the DELPHI language (formerly TURBO PASCAL) was chosen because it gives a better overview of nested ifs and for loops than C++, even if no such compiler seems to exist for micro-controllers. The point of this section is to present, as clearly as possible, an astute and short implementation of the algorithm that can easily be translated into any current language. For simplicity, the "NaN", "infinity" and "denormalized" handlers have been omitted. However, they should be added to a real implementation in order to handle these special values correctly.


References
