##### The Rationality of Irrationality

Rationalism is a philosophical tradition which, in a general sense, eschews the understanding of reality through mystery and magic. This has parallels with computing, where results can often seem magical. This is doubly true when dealing with real-world numbers: although the hardware seems ill equipped to express real numerical values, complex calculations are nonetheless possible.

A computer represents a number as a finite string of 1s and 0s. On modern computers, this string is typically 64 bits long. This means that the domain of natural numbers that can be expressed in this system is countable and finite, with an upper value of \(2^{63}-1\) when one bit is reserved for the sign.
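The limits of this representation are easy to demonstrate. The following Python sketch (illustrative; the `to_i64` helper is not part of the article's code) models how a signed 64-bit register holds a value, including the wraparound that occurs past the upper bound:

```python
# Interpret an arbitrary Python int as a two's-complement
# 64-bit value, the way a RISC-V register would hold it.
def to_i64(v: int) -> int:
    v &= (1 << 64) - 1
    return v - (1 << 64) if v >= (1 << 63) else v

max_i64 = (1 << 63) - 1
print(max_i64)              # 9223372036854775807
print(to_i64(max_i64 + 1))  # -9223372036854775808: wraps around
```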

Unlike the natural numbers, the Reals are not countable, so only a subset of them can be represented by finite bit strings. One approach is to use quantization. For example, each increment of a binary value could correspond to a Real value of 0.25. Under this scheme, the bit sequence \(b101\) would represent the Real value 1.25.
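In Python, this hypothetical quantization scheme is a one-line decoder (the `decode` name and `STEP` constant are illustrative):

```python
# Hypothetical quantization scheme: each increment of the
# integer corresponds to a Real step of 0.25.
STEP = 0.25

def decode(bits: int) -> float:
    return bits * STEP

print(decode(0b101))  # 1.25
```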

Another approach is to use Rational numbers, the subset of the Real numbers that can be expressed as the quotient of two integers: the dividend and the divisor. The rational numbers whose dividend and divisor fit within a fixed number of bits are therefore also finite and countable.
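Python's standard library has a direct realization of this idea, which makes a handy cross-check:

```python
from fractions import Fraction

# A rational is held exactly as a pair of integers, so no
# precision is lost in arithmetic that stays rational.
third = Fraction(1, 3)
print(third)                       # 1/3
print(third + third + third == 1)  # True
```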

Rational numbers cover a good portion of the range of Real values. However, Irrational numbers are also Reals, and they are a little bit trickier to express since their representation requires an infinite number of digits. Therefore the best that can be achieved is an approximation of irrational numbers, which means that their use in calculations will introduce errors due to lack of precision. To overcome this limitation, the values can be approximated on demand and iteratively refined to obtain the desired precision. However, this adds steps to the calculation, which slows down the entire process.
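As an illustration of on-demand refinement (a Python sketch; the `refine_sqrt2` helper is hypothetical, not from the article's code), Newton's method approximates the irrational \(\sqrt{2}\) to a requested tolerance, with tighter tolerances costing extra iterations:

```python
def refine_sqrt2(tolerance: float) -> tuple[float, int]:
    # Newton's method on f(x) = x^2 - 2: each pass refines the
    # estimate, at the cost of an extra step in the calculation.
    estimate = 1.0
    steps = 0
    while abs(estimate * estimate - 2.0) > tolerance:
        estimate = (estimate + 2.0 / estimate) / 2.0
        steps += 1
    return estimate, steps

value, steps = refine_sqrt2(1e-12)
print(value, steps)
```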

Evidently, there are many pitfalls in using integer values to express
Real data. Real values are not countable and can therefore not be
expressed using countable quantities such as integers. “Squeezing
infinitely many real numbers into a finite number of bits requires an
approximate representation.”^{1} The floating-point standard (IEEE
754-2008^{2}) offers a good middle ground for dealing with Real data
while respecting the storage constraints of the computer’s
registers. However, floating-point numbers can still only express
a subset of the Real values: the Rational numbers that can be
represented as a pair of integers. This leaves out irrational numbers,
such as \(\pi\) and \(e\), which are important for calculating physical
quantities and must therefore be approximated, preferably in the most
efficient way possible. An efficient procedure for approximating Real
values with very high precision is of primary importance, especially
if these values are used frequently.
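The approximate nature of floating-point values is easy to observe in any language that uses IEEE 754 doubles; a quick Python check:

```python
from fractions import Fraction

# 0.1 has no finite binary expansion, so the literal below is
# actually the nearest representable rational of the form m * 2**e.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# The exact rational actually stored for 0.1:
print(Fraction(0.1))     # 3602879701896397/36028797018963968
```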

## Approximately Polynomial

One way to approximate a Real number to a given precision is by
representing it as a polynomial: a series of terms, each of which can
easily be expressed by a computer. For example, the trigonometric
functions, \(\sin\) and \(\cos\), can be approximated using the Taylor
polynomial^{3} evaluated around zero. The value of \(\cos x\) is
approximated, with the term \(\cos{\tilde{\mathcal{E}}(x)}\) capturing
the remainder error, using the following series of terms:

\[ \cos x = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4\cos{\tilde{\mathcal{E}}(x)} \]

This is the expansion of the first terms of the Taylor series; the final term is the Lagrange form of the remainder.
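The polynomial part of this expansion is easy to check numerically (a Python sketch; `cos_poly2` is just an illustrative name): the error of \(1 - \frac{1}{2}x^2\) never exceeds the magnitude of the remainder term \(\frac{1}{24}x^4\):

```python
import math

def cos_poly2(x: float) -> float:
    # Polynomial part of the expansion: 1 - x^2/2
    return 1.0 - x * x / 2.0

x = 0.5
bound = x ** 4 / 24  # magnitude of the remainder term
print(cos_poly2(x), math.cos(x), bound)
assert abs(cos_poly2(x) - math.cos(x)) <= bound
```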

Deriving the Taylor polynomial requires knowledge of the n-th derivative of the function being approximated. Fortunately, the derivatives of the trigonometric functions take simple integer values at \(x = 0\). Moreover, these values are periodic with a period of 4, meaning that the 4-th derivative has the same value as the original function. Therefore the derivatives of \(\sin\) and \(\cos\) can be implemented using a lookup table:

| Derivative | SIN | COS |
|------------|-----|-----|
| 0          | 0   | 1   |
| 1          | 1   | 0   |
| 2          | 0   | -1  |
| 3          | -1  | 0   |

Given that the relevant values of the various derivatives of the trigonometric functions are known, the result of applying one of these functions to an arbitrary number can be approximated using the Taylor series.
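Before dropping into assembly, the scheme can be sketched in Python (an illustrative model, not the article's implementation); the tables hold the derivative values listed above:

```python
import math

SIN_LUT = [0, 1, 0, -1]  # n-th derivative of sin at 0, period 4
COS_LUT = [1, 0, -1, 0]  # n-th derivative of cos at 0, period 4

def taylor(lut, order, x):
    """Evaluate the Taylor polynomial of the given order around 0,
    reading the n-th derivative at 0 from a period-4 lookup table."""
    total = 0.0
    x_n = 1.0       # running x**n
    n_fact = 1      # running n!
    for n in range(order + 1):
        deriv = lut[n % 4]
        if deriv != 0:  # skip terms with a zero derivative
            total += deriv * x_n / n_fact
        x_n *= x
        n_fact *= n + 1
    return total

print(taylor(COS_LUT, 7, 0.5), math.cos(0.5))
```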

The following assembly code defines the `__taylor` function, which evaluates the n-th Taylor polynomial to approximate a periodic function for a given value \(x\).

NOTE: The `__taylor` function defines aliases for certain register names to make their usage clear.

```
#define n t0
#define i t3
#define nfact t2
#define x ft0
#define xn ft1
__taylor:
function_prologue 0
fmv.d x, fa0
fmv.d xn, x
mv n, zero // n <- 0
mv i, zero // i <- 0
lb t1, 0(a0)
fcvt.d.l fa0, t1 // fa0 <- f(0)
addi nfact, n, 1 // n! <- 0! = 1
1:
addi n, n, 1 // n <- n+1
addi i, i, 1 // i <- i+1
mul nfact, n, nfact // n! <- n*(n-1)!
fcvt.d.l ft2, nfact // ft2 <- n!
li t1, 4
blt i, t1, 2f
remu i, i, t1
2: // f^(n)(0): n-th derivative of f(0)
add t1, a0, i
lb t1, 0(t1) // t1 <- f^(n)(0)
beqz t1, 3f // Short-circuit if f^(n)(0) = 0
fcvt.d.l ft3, t1
fdiv.d ft3, ft3, ft2 // ft3 <- f^(n)(0)/n!
// fa0 <- fa0 + (x^n)*f^(n)(0)/n!
fmadd.d fa0, xn, ft3, fa0
3:
fmul.d xn, xn, x // xn <- x*x^(n)
blt n, a1, 1b // while n < a1
addi n, n, 1 // n <- n+1
mul nfact, nfact, n // nfact <- (n+1)!
fcvt.d.l ft2, nfact
fdiv.d fa1, xn, ft2 // fa1 <- residual
function_epilogue 0
ret
```

Several conversion instructions are used to set up the floating-point factors used to calculate the Taylor polynomial. For example, the running value of \(n!\) is calculated using integer instructions since its value will always be a whole number. However, it is converted to a double-precision floating-point value (by the `fcvt.d.l` instruction) in order to be used as an operand for floating-point arithmetic.

The `__taylor` function uses the lookup table to fetch the n-th derivative of the function. If the value of the derivative is zero, it skips the current term. Otherwise the value is converted to a floating-point number, divided by \(n!\), scaled by the n-th power of \(x\), and added to the running sum. The `fmadd.d` instruction performs a fused multiply-add: it multiplies two numbers, then adds the product to a third.

The \(\cos{\tilde{\mathcal{E}}(x)}\) term in the expansion of the Taylor series for \(\cos x\) represents the remainder error. As the polynomial is expanded to include more terms, this error value becomes smaller (meaning the approximation will be closer to the actual value). Since \(\left|\cos{\tilde{\mathcal{E}}(x)}\right| \le 1\) for all \(x\), we have

\[ \left|\frac{1}{(n+1)!}x^{n+1}\cos{\tilde{\mathcal{E}}(x)}\right| \le \frac{\left|x\right|^{n+1}}{(n+1)!} \]

The upper bound of the remainder in the approximation is therefore \(\frac{\left|x\right|^{n+1}}{(n+1)!}\). The code to calculate this error bound appears just before the function epilogue, prior to returning.
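This bound is easy to verify numerically (a Python sketch; the `remainder_bound` helper is illustrative): for the 7-th polynomial at \(x = 0.5\), the actual error stays below \(\frac{|x|^{8}}{8!}\):

```python
import math

def remainder_bound(x: float, n: int) -> float:
    # Upper bound on the Lagrange remainder: |x|^(n+1) / (n+1)!
    return abs(x) ** (n + 1) / math.factorial(n + 1)

x = 0.5
# 7-th Taylor polynomial for cos (the odd-order terms are zero).
approx = 1 - x**2 / 2 + x**4 / 24 - x**6 / 720
err = abs(approx - math.cos(x))
print(err, remainder_bound(x, 7))
assert err <= remainder_bound(x, 7)
```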

## COS I Said So

The `__taylor` function defined previously can be used to approximate the cosine of a given angle. The following RISC-V assembly code implements this approximation:

```
__cos:
addi sp, sp, -24 // Allocate stack space
sd ra, 16(sp)
sd a0, 8(sp)
sd a1, 0(sp)
fld fa0, 0(a0) // fa0 <- x
la a0, cos_lut
fclass.d a1, fa0
andi a1, a1, 0x18 // Check for zero
beqz a1, 1f
lb a1, 0(a0) // if x = 0; load from lut
fcvt.d.l fa0, a1
j 2f
1:
li a1, 7 // else; use the 7-th taylor polynomial.
call __taylor
2:
ld a1, 0(sp)
ld a0, 8(sp)
ld ra, 16(sp)
addi sp, sp, 24 // Release stack space
fsd fa0, 0(a1)
ret
```

It’s important to note that for the simple cases, the Taylor polynomial is not used since the exact value is known. On line 8 of the code listing, the `fclass.d` instruction is used to determine the class of the double-precision value that was loaded into register `fa0`. If its value is 0, one of bits 3 or 4 of the target register, `a1` in this case, will be asserted, indicating that it is either negative or positive zero (which are effectively the same but may have different semantic meanings).
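The two zero classes compare as equal but keep their signs, which a quick Python check makes visible:

```python
import math

# +0.0 and -0.0 compare equal but carry different signs, which is
# why fclass.d reports them with two separate bits.
print(0.0 == -0.0)               # True
print(math.copysign(1.0, 0.0))   # 1.0
print(math.copysign(1.0, -0.0))  # -1.0
```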

If the angle provided to the `__cos` function is in fact zero, the lookup table, referenced by the symbol `cos_lut`, is consulted to find the exact value of the cosine of the angle. Otherwise the 7-th Taylor polynomial is used to approximate its value. The `__taylor` function is invoked with the address of the `cos_lut` table, the order of the polynomial, and the value of the angle \(x\) as its arguments.
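The control flow of `__cos` can be modelled in a few lines of Python (a sketch under the same lookup-table scheme; the helper names are illustrative):

```python
import math

COS_LUT = [1, 0, -1, 0]  # n-th derivative of cos at 0, period 4

def taylor(lut, order, x):
    # Taylor polynomial around 0 using a period-4 derivative table.
    total, x_n, n_fact = 0.0, 1.0, 1
    for n in range(order + 1):
        if lut[n % 4]:
            total += lut[n % 4] * x_n / n_fact
        x_n *= x
        n_fact *= n + 1
    return total

def cos_approx(x: float) -> float:
    # Mirror of __cos: exact table value for the zero case,
    # 7-th order Taylor polynomial otherwise.
    if x == 0.0:
        return float(COS_LUT[0])
    return taylor(COS_LUT, 7, x)

print(cos_approx(0.0), cos_approx(1.0), math.cos(1.0))
```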

# Conclusion

Although the base RISC-V instruction set allows for integer arithmetic only, the floating-point extension provides the ability to manipulate rational numbers as well. This is useful for carrying out real-world calculations. However, this still precludes irrational numbers, which are important quantities if, for example, we wish to calculate the cosine of a given angle.

A lookup table can be used to determine pre-computed values for certain functions which would otherwise be difficult to express. In fact, many mathematics textbooks include such lookup tables in their appendices. However, pre-computing values requires knowing in advance which values will be needed.

A better approach is to approximate these values using the Taylor series. However, the time complexity of the calculation grows as the desired precision increases. Therefore it is important that this function be carried out as efficiently as possible.

In this article, we looked at a RISC-V assembly implementation of a
function to calculate the Taylor polynomial. This function was used to
approximate the cosine of an angle \(x\), expressed in radians, to some
arbitrary precision. Although we only looked at \(\cos x\), the same
procedure can also be used to evaluate \(\sin x\) (using a different
lookup table). The code for this example is available in a
BitBucket^{4} repository; it also implements the \(\sin\) function
using the same technique.

This implementation of the Taylor polynomial can also be reused to approximate any function whose derivatives are periodic. The only pre-condition is to provide a lookup table to determine the values of the n-th derivative. Play around with the code to see if you can approximate other functions; it’s not as RISC-y as it seems.

### Footnotes

^{1} David Goldberg. 1991. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv. 23, 1 (March 1991), 5–48. https://doi.org/10.1145/103162.103163

^{2} https://web.archive.org/web/20180419150129/http://grouper.ieee.org/groups/754/

^{3} https://en.wikipedia.org/wiki/Taylor_series

^{4} https://bitbucket.org/marc_perron/risc-v/src/mainline/

Filed under: RISC V - @ 2023-07-21 23:58